Yocto 2.4
Move OpenBMC to Yocto 2.4(rocko)
Tested: Built and verified Witherspoon and Palmetto images
Change-Id: I12057b18610d6fb0e6903c60213690301e9b0c67
Signed-off-by: Brad Bishop <bradleyb@fuzziesquirrel.com>
diff --git a/import-layers/yocto-poky/README b/import-layers/yocto-poky/README
deleted file mode 100644
index 9a52677..0000000
--- a/import-layers/yocto-poky/README
+++ /dev/null
@@ -1,58 +0,0 @@
-Poky
-====
-
-Poky is an integration of various components to form a complete prepackaged
-build system and development environment. It features support for building
-customised embedded device style images. There are reference demo images
-featuring an X11/Matchbox/GTK themed UI called Sato. The system supports
-cross-architecture application development using QEMU emulation and a
-standalone toolchain and SDK with IDE integration.
-
-Additional information on the specifics of hardware that Poky supports
-is available in README.hardware. Further hardware support can easily be added
-in the form of layers which extend the system's capabilities in a modular way.
-
-As an integration layer Poky consists of several upstream projects such as
-BitBake, OpenEmbedded-Core, Yocto documentation and various sources of information
-e.g. for the hardware support. Poky is in turn a component of the Yocto Project.
-
-The Yocto Project has extensive documentation about the system including a
-reference manual which can be found at:
- http://yoctoproject.org/documentation
-
-OpenEmbedded-Core is a layer containing the core metadata for current versions
-of OpenEmbedded. It is distro-less (can build a functional image with
-DISTRO = "nodistro") and contains only emulated machine support.
-
-For information about OpenEmbedded, see the OpenEmbedded website:
- http://www.openembedded.org/
-
-Where to Send Patches
-=====================
-
-As Poky is an integration repository (built using a tool called combo-layer),
-patches against the various components should be sent to their respective
-upstreams:
-
-bitbake:
- Git repository: http://git.openembedded.org/bitbake/
- Mailing list: bitbake-devel@lists.openembedded.org
-
-documentation:
- Git repository: http://git.yoctoproject.org/cgit/cgit.cgi/yocto-docs/
- Mailing list: yocto@yoctoproject.org
-
-meta-poky, meta-yocto-bsp:
- Git repository: http://git.yoctoproject.org/cgit/cgit.cgi/meta-yocto(-bsp)
- Mailing list: poky@yoctoproject.org
-
-Everything else should be sent to the OpenEmbedded Core mailing list. If in
-doubt, check the oe-core git repository for the content you intend to modify.
-Before sending, be sure the patches apply cleanly to the current oe-core git
-repository.
-
- Git repository: http://git.openembedded.org/openembedded-core/
- Mailing list: openembedded-core@lists.openembedded.org
-
-Note: The scripts directory should be treated with extra care as it is a mix of
-oe-core and poky-specific files.
diff --git a/import-layers/yocto-poky/README.LSB b/import-layers/yocto-poky/README.LSB
new file mode 100644
index 0000000..c9dca3f
--- /dev/null
+++ b/import-layers/yocto-poky/README.LSB
@@ -0,0 +1,25 @@
+OE-Core aims to be able to provide basic LSB compatible images. There
+are some challenges for OE as LSB isn't always 100% relevant to its
+target embedded and IoT audiences.
+
+One challenge is that the LSB spec is no longer being actively
+developed [https://github.com/LinuxStandardBase/lsb] and has
+components which are end of life or significantly dated. OE
+therefore provides compatibility with the following caveats:
+
+* Qt4 is provided by the separate meta-qt4 layer. It's noted that Qt4
+ is end of life and this isn't something the core project regularly
+ tests any longer. Users are recommended to group together to support
+ maintenance of that layer. [http://git.yoctoproject.org/cgit/cgit.cgi/meta-qt4/]
+
+* mailx has been dropped since it's no longer being developed upstream
+ and there are better, more modern replacements such as s-nail
+ (http://sdaoden.eu/code.html) or mailutils (http://mailutils.org/).
+
+* A few perl modules that were required by LSB 4.x aren't provided:
+ libclass-isa, libenv, libdumpvalue, libfile-checktree,
+ libi18n-collate, libpod-plainer.
+
+* libpng 1.2 isn't provided; oe-core includes the latest release of libpng
+ instead.
+
diff --git a/import-layers/yocto-poky/README.hardware b/import-layers/yocto-poky/README.hardware
deleted file mode 100644
index e6ccf78..0000000
--- a/import-layers/yocto-poky/README.hardware
+++ /dev/null
@@ -1,429 +0,0 @@
- Poky Hardware README
- ====================
-
-This file gives details about using Poky with the reference machines
-supported out of the box. A full list of supported reference target machines
-can be found by looking in the following directories:
-
- meta/conf/machine/
- meta-yocto-bsp/conf/machine/
-
-If you are in doubt about using Poky/OpenEmbedded with your hardware, consult
-the documentation for your board/device.
-
-Support for additional devices is normally added by creating BSP layers - for
-more information please see the Yocto Board Support Package (BSP) Developer's
-Guide - documentation source is in documentation/bspguide or download the PDF
-from:
-
- http://yoctoproject.org/documentation
-
-Support for physical reference hardware has now been split out into a
-meta-yocto-bsp layer which can be removed separately from other layers if not
-needed.
-
-
-QEMU Emulation Targets
-======================
-
-To simplify development, the build system supports building images to
-work with the QEMU emulator in system emulation mode. Several architectures
-are currently supported:
-
- * ARM (qemuarm)
- * x86 (qemux86)
- * x86-64 (qemux86-64)
- * PowerPC (qemuppc)
- * MIPS (qemumips)
-
-Use of the QEMU images is covered in the Yocto Project Reference Manual.
-The appropriate MACHINE variable value corresponding to the target is given
-in brackets.
-
-
-Hardware Reference Boards
-=========================
-
-The following boards are supported by the meta-yocto-bsp layer:
-
- * Texas Instruments Beaglebone (beaglebone)
- * Freescale MPC8315E-RDB (mpc8315e-rdb)
-
-For more information see the board's section below. The appropriate MACHINE
-variable value corresponding to the board is given in brackets.
-
-Reference Board Maintenance
-===========================
-
-Send pull requests, patches, comments or questions about meta-yocto-bsps to poky@yoctoproject.org
-
-Maintainers: Kevin Hao <kexin.hao@windriver.com>
- Bruce Ashfield <bruce.ashfield@windriver.com>
-
-Consumer Devices
-================
-
-The following consumer devices are supported by the meta-yocto-bsp layer:
-
- * Intel x86 based PCs and devices (genericx86)
- * Ubiquiti Networks EdgeRouter Lite (edgerouter)
-
-For more information see the device's section below. The appropriate MACHINE
-variable value corresponding to the device is given in brackets.
-
-
-
- Specific Hardware Documentation
- ===============================
-
-
-Intel x86 based PCs and devices (genericx86*)
-=============================================
-
-The genericx86 and genericx86-64 MACHINE are tested on the following platforms:
-
-Intel Xeon/Core i-Series:
- + Intel NUC5 Series - ix-52xx Series SOC (Broadwell)
- + Intel NUC6 Series - ix-62xx Series SOC (Skylake)
- + Intel Shumway Xeon Server
-
-Intel Atom platforms:
- + MinnowBoard MAX - E3825 SOC (Bay Trail)
- + MinnowBoard MAX - Turbot (ADI Engineering) - E3826 SOC (Bay Trail)
- - These boards can run in either 32-bit or 64-bit mode depending on firmware
- - See minnowboard.org for details
- + Intel Braswell SOC
-
-and is likely to work on many unlisted Atom/Core/Xeon based devices. The MACHINE
-type supports ethernet, wifi, sound, and Intel/vesa graphics by default in
-addition to common PC input devices, busses, and so on.
-
-Depending on the device, it can boot from a traditional hard-disk, a USB device,
-or over the network. Writing generated images to physical media is
-straightforward with a caveat for USB devices. The following examples assume the
-target boot device is /dev/sdb, be sure to verify this and use the correct
- device as the following commands are run as root and are not reversible.
-
-USB Device:
- 1. Build a live image. This image type consists of a simple filesystem
- without a partition table, which is suitable for USB keys, and with the
- default setup for the genericx86 machine, this image type is built
- automatically for any image you build. For example:
-
- $ bitbake core-image-minimal
-
- 2. Use the "dd" utility to write the image to the raw block device. For
- example:
-
- # dd if=core-image-minimal-genericx86.hddimg of=/dev/sdb
-
- If the device fails to boot with "Boot error" displayed, or apparently
- stops just after the SYSLINUX version banner, it is likely the BIOS cannot
- understand the physical layout of the disk (or rather it expects a
- particular layout and cannot handle anything else). There are two possible
- solutions to this problem:
-
- 1. Change the BIOS USB Device setting to HDD mode. The label will vary by
- device, but the idea is to force BIOS to read the Cylinder/Head/Sector
- geometry from the device.
-
- 2. Use a ".wic" image with an EFI partition
-
- a) With a default grub-efi bootloader:
- # dd if=core-image-minimal-genericx86-64.wic of=/dev/sdb
-
- b) Use systemd-boot instead
- - Build an image with EFI_PROVIDER="systemd-boot" then use the above
- dd command to write the image to a USB stick.
-
-
-Texas Instruments Beaglebone (beaglebone)
-=========================================
-
-The Beaglebone is an ARM Cortex-A8 development board with USB, Ethernet, 2D/3D
-accelerated graphics, audio, serial, JTAG, and SD/MMC. The Black adds a faster
-CPU, more RAM, eMMC flash and a micro HDMI port. The beaglebone MACHINE is
-tested on the following platforms:
-
- o Beaglebone Black A6
- o Beaglebone A6 (the original "White" model)
-
-The Beaglebone Black has eMMC, while the White does not. Pressing the USER/BOOT
-button when powering on will temporarily change the boot order. But for the sake
-of simplicity, these instructions assume you have erased the eMMC on the Black,
-so its boot behavior matches that of the White and boots off of SD card. To do
-this, issue the following commands from the u-boot prompt:
-
- # mmc dev 1
- # mmc erase 0 512
-
-To further tailor these instructions for your board, please refer to the
-documentation at http://www.beagleboard.org/bone and http://www.beagleboard.org/black
-
-From a Linux system with access to the image files perform the following steps:
-
- 1. Build an image. For example:
-
- $ bitbake core-image-minimal
-
- 2. Use the "dd" utility to write the image to the SD card. For example:
-
- # dd if=core-image-minimal-beaglebone.wic of=/dev/sdb
-
- 3. Insert the SD card into the Beaglebone and boot the board.
-
-Freescale MPC8315E-RDB (mpc8315e-rdb)
-=====================================
-
-The MPC8315 PowerPC reference platform (MPC8315E-RDB) is aimed at hardware and
-software development of network attached storage (NAS) and digital media server
-applications. The MPC8315E-RDB features the PowerQUICC II Pro processor, which
-includes a built-in security accelerator.
-
-(Note: you may find it easier to order MPC8315E-RDBA; this appears to be the
-same board in an enclosure with accessories. In any case it is fully
-compatible with the instructions given here.)
-
-Setup instructions
-------------------
-
-You will need the following:
-* NFS root setup on your workstation
-* TFTP server installed on your workstation
-* Straight-thru 9-conductor serial cable (DB9, M/F) connected from your
- PC to UART1
-* Ethernet connected to the first ethernet port on the board
-
---- Preparation ---
-
-Note: if you have altered your board's ethernet MAC address(es) from the
-defaults, or you need to do so because you want multiple boards on the same
-network, then you will need to change the values in the dts file (patch
-linux/arch/powerpc/boot/dts/mpc8315erdb.dts within the kernel source). If
-you have left them at the factory default then you shouldn't need to do
-anything here.
-
-Note: To boot from a USB disk you need a u-boot that supports the 'ext2load usb'
-command. You need to set up a TFTP server, load u-boot from there and
-flash it to NOR flash.
-
-Beware! Flashing the bootloader is a potentially dangerous operation that can
-brick your device if done incorrectly. Please make sure you understand
-what the commands below do before executing them.
-
-Load the new u-boot.bin from TFTP server to memory address 200000
-=> tftp 200000 u-boot.bin
-
-Disable flash protection
-=> protect off all
-
-Erase the old u-boot from fe000000 to fe06ffff in NOR flash.
-The size is 0x70000 (458752 bytes)
-=> erase fe000000 fe06ffff
-
-Copy the new u-boot from address 200000 to fe000000.
-The size is 0x70000; it has to be greater than or equal to the u-boot.bin size
-=> cp.b 200000 fe000000 70000
-
-Enable flash protection again
-=> protect on all
-
-Reset the board
-=> reset
-
---- Booting from USB disk ---
-
- 1. Flash partitioned image to the USB disk
-
- # dd if=core-image-minimal-mpc8315e-rdb.wic of=/dev/sdb
-
- 2. Plug USB disk into the MPC8315 board
-
- 3. Connect the board's first serial port to your workstation and then start up
- your favourite serial terminal so that you will be able to interact with
- the serial console. If you don't have a favourite, picocom is suggested:
-
- $ picocom /dev/ttyUSB0 -b 115200
-
- 4. Power up or reset the board and press a key on the terminal when prompted
- to get to the U-Boot command line
-
- 5. Optional. Load the u-boot.bin from the USB disk:
-
- => usb start
- => ext2load usb 0:1 200000 u-boot.bin
-
- and flash it to NOR flash as described above.
-
- 6. Set fdtaddr and loadaddr. This is not necessary if you set them before.
-
- => setenv fdtaddr a00000
- => setenv loadaddr 1000000
-
- 7. Load the kernel and dtb from first partition of the USB disk:
-
- => usb start
- => ext2load usb 0:1 $loadaddr uImage
- => ext2load usb 0:1 $fdtaddr dtb
-
- 8. Set bootargs and boot up the device
-
- => setenv bootargs root=/dev/sdb2 rw rootwait console=ttyS0,115200
- => bootm $loadaddr - $fdtaddr
-
-
---- Booting from NFS root ---
-
-Load the kernel and dtb (device tree blob), and boot the system as follows:
-
- 1. Get the kernel (uImage-mpc8315e-rdb.bin) and dtb (uImage-mpc8315e-rdb.dtb)
- files from the tmp/deploy directory, and make them available on your TFTP
- server.
-
- 2. Connect the board's first serial port to your workstation and then start up
- your favourite serial terminal so that you will be able to interact with
- the serial console. If you don't have a favourite, picocom is suggested:
-
- $ picocom /dev/ttyUSB0 -b 115200
-
- 3. Power up or reset the board and press a key on the terminal when prompted
- to get to the U-Boot command line
-
- 4. Set up the environment in U-Boot:
-
- => setenv ipaddr <board ip>
- => setenv serverip <tftp server ip>
- => setenv bootargs root=/dev/nfs rw nfsroot=<nfsroot ip>:<rootfs path> ip=<board ip>:<server ip>:<gateway ip>:255.255.255.0:mpc8315e:eth0:off console=ttyS0,115200
-
- 5. Download the kernel and dtb, and boot:
-
- => tftp 1000000 uImage-mpc8315e-rdb.bin
- => tftp 2000000 uImage-mpc8315e-rdb.dtb
- => bootm 1000000 - 2000000
-
---- Booting from JFFS2 root ---
-
- 1. First boot the board with NFS root.
-
- 2. Erase the MTD partition which will be used as root:
-
- $ flash_eraseall /dev/mtd3
-
- 3. Copy the JFFS2 image to the MTD partition:
-
- $ flashcp core-image-minimal-mpc8315e-rdb.jffs2 /dev/mtd3
-
- 4. Then reboot the board and set up the environment in U-Boot:
-
- => setenv bootargs root=/dev/mtdblock3 rootfstype=jffs2 console=ttyS0,115200
-
-
-Ubiquiti Networks EdgeRouter Lite (edgerouter)
-==============================================
-
-The EdgeRouter Lite is part of the EdgeMax series. It is a MIPS64 router
-(based on the Cavium Octeon processor) with 512MB of RAM, which uses an
-internal USB pendrive for storage.
-
-Setup instructions
-------------------
-
-You will need the following:
-* RJ45 -> serial ("rollover") cable connected from your PC to the CONSOLE
- port on the device
-* Ethernet connected to the first ethernet port on the board
-
-If using NFS as part of the setup process, you will also need:
-* NFS root setup on your workstation
-* TFTP server installed on your workstation (if fetching the kernel from
- TFTP, see below).
-
---- Preparation ---
-
-Build an image (e.g. core-image-minimal) using "edgerouter" as the MACHINE.
-The following instructions are based on core-image-minimal; other image targets
-should behave similarly.
-
---- Booting from NFS root / kernel via TFTP ---
-
-Load the kernel, and boot the system as follows:
-
- 1. Get the kernel (vmlinux) file from the tmp/deploy/images/edgerouter
- directory, and make them available on your TFTP server.
-
- 2. Connect the board's first serial port to your workstation and then start up
- your favourite serial terminal so that you will be able to interact with
- the serial console. If you don't have a favourite, picocom is suggested:
-
- $ picocom /dev/ttyS0 -b 115200
-
- 3. Power up or reset the board and press a key on the terminal when prompted
- to get to the U-Boot command line
-
- 4. Set up the environment in U-Boot:
-
- => setenv ipaddr <board ip>
- => setenv serverip <tftp server ip>
-
- 5. Download the kernel and boot:
-
- => tftp $loadaddr vmlinux
- => bootoctlinux $loadaddr coremask=0x3 root=/dev/nfs rw nfsroot=<nfsroot ip>:<rootfs path> ip=<board ip>:<server ip>:<gateway ip>:<netmask>:edgerouter:eth0:off mtdparts=phys_mapped_flash:512k(boot0),512k(boot1),64k@3072k(eeprom)
-
---- Booting from USB disk ---
-
-To boot from the USB disk, you either need to remove it from the edgerouter
-box and populate it from another computer, or use a previously booted NFS
-image and populate from the edgerouter itself.
-
-Type 1: Use partitioned image
------------------------------
-
-Steps:
-
- 1. Remove the USB disk from the edgerouter and insert it into a computer
- that has access to your build artifacts.
-
- 2. Flash the image.
-
- # dd if=core-image-minimal-edgerouter.wic of=/dev/sdb
-
- 3. Insert USB disk into the edgerouter and boot it.
-
-Type 2: NFS
------------
-
-Note: If you place the kernel on the ext3 partition, you must re-create the
- ext3 filesystem, since the factory u-boot can only handle 128 byte inodes and
- cannot read the partition otherwise.
-
- These boot instructions assume that you have recreated the ext3 filesystem with
- 128 byte inodes, you have an updated u-boot, or you are running an image capable
- of making the filesystem on the board itself.
-
-
- 1. Boot from NFS root
-
- 2. Mount the USB disk partition 2 and then extract the contents of
- tmp/deploy/core-image-XXXX.tar.bz2 into it.
-
- Before starting, copy core-image-minimal-xxx.tar.bz2 and vmlinux into
- rootfs path on your workstation.
-
- and then,
-
- # mount /dev/sda2 /media/sda2
- # tar -xvjpf core-image-minimal-XXX.tar.bz2 -C /media/sda2
- # cp vmlinux /media/sda2/boot/vmlinux
- # umount /media/sda2
- # reboot
-
- 3. Reboot the board and press a key on the terminal when prompted to get to the U-Boot
- command line:
-
- # reboot
-
- 4. Load the kernel and boot:
-
- => ext2load usb 0:2 $loadaddr boot/vmlinux
- => bootoctlinux $loadaddr coremask=0x3 root=/dev/sda2 rw rootwait mtdparts=phys_mapped_flash:512k(boot0),512k(boot1),64k@3072k(eeprom)
diff --git a/import-layers/yocto-poky/README.hardware b/import-layers/yocto-poky/README.hardware
new file mode 120000
index 0000000..8b6258d
--- /dev/null
+++ b/import-layers/yocto-poky/README.hardware
@@ -0,0 +1 @@
+meta-yocto-bsp/README.hardware
\ No newline at end of file
diff --git a/import-layers/yocto-poky/README.poky b/import-layers/yocto-poky/README.poky
new file mode 120000
index 0000000..1877dca
--- /dev/null
+++ b/import-layers/yocto-poky/README.poky
@@ -0,0 +1 @@
+meta-poky/README.poky
\ No newline at end of file
diff --git a/import-layers/yocto-poky/README.qemu b/import-layers/yocto-poky/README.qemu
new file mode 100644
index 0000000..9f56b7d
--- /dev/null
+++ b/import-layers/yocto-poky/README.qemu
@@ -0,0 +1,15 @@
+QEMU Emulation Targets
+======================
+
+To simplify development, the build system supports building images to
+work with the QEMU emulator in system emulation mode. Several architectures
+are currently supported in 32 and 64 bit variants:
+
+ * ARM (qemuarm + qemuarm64)
+ * x86 (qemux86 + qemux86-64)
+ * PowerPC (qemuppc only)
+ * MIPS (qemumips + qemumips64)
+
+Use of the QEMU images is covered in the Yocto Project Reference Manual.
+The appropriate MACHINE variable value corresponding to the target is given
+in brackets.
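+
+As a usage sketch (not part of the original text; the MACHINE value and image
+name below are only examples), a QEMU image is typically built and booted with:
+
+  MACHINE ?= "qemux86-64"     # set in build/conf/local.conf
+  $ bitbake core-image-minimal
+  $ runqemu qemux86-64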
diff --git a/import-layers/yocto-poky/bitbake/README b/import-layers/yocto-poky/bitbake/README
new file mode 100644
index 0000000..479c376
--- /dev/null
+++ b/import-layers/yocto-poky/bitbake/README
@@ -0,0 +1,35 @@
+Bitbake
+=======
+
+BitBake is a generic task execution engine that allows shell and Python tasks to be run
+efficiently and in parallel while working within complex inter-task dependency constraints.
+One of BitBake's main users, OpenEmbedded, takes this core and builds embedded Linux software
+stacks using a task-oriented approach.
+
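+As an illustrative sketch (not taken from the upstream text; the recipe content
+and task names are invented), a trivial recipe could define one shell task and
+one Python task:
+
+  do_build() {
+      echo "hello from a shell task"
+  }
+  addtask build
+
+  python do_report() {
+      bb.note("hello from a Python task")
+  }
+  addtask report after do_build
+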
+For information about Bitbake, see the OpenEmbedded website:
+ http://www.openembedded.org/
+
+The BitBake documentation sources can be found under the doc directory; a rendered
+HTML version is available on the Yocto Project website:
+ http://yoctoproject.org/documentation
+
+Contributing
+------------
+
+Please refer to
+http://www.openembedded.org/wiki/How_to_submit_a_patch_to_OpenEmbedded
+for guidelines on how to submit patches. Note that the guidance there targets
+OpenEmbedded (and OE-Core) rather than BitBake patches, which go to
+bitbake-devel@lists.openembedded.org, but the general guidelines still apply.
+Once the commit(s) have been created, send the patch with git-send-email. For
+example, to send the last commit (HEAD) on the current branch, type:
+
+ git send-email -M -1 --to bitbake-devel@lists.openembedded.org
+
+Mailing list:
+
+ http://lists.openembedded.org/mailman/listinfo/bitbake-devel
+
+Source code:
+
+ http://git.openembedded.org/bitbake/
diff --git a/import-layers/yocto-poky/bitbake/bin/bitbake b/import-layers/yocto-poky/bitbake/bin/bitbake
index 9f5c2d4..3acc23a 100755
--- a/import-layers/yocto-poky/bitbake/bin/bitbake
+++ b/import-layers/yocto-poky/bitbake/bin/bitbake
@@ -36,9 +36,9 @@
from bb.main import bitbake_main, BitBakeConfigParameters, BBMainException
if sys.getfilesystemencoding() != "utf-8":
- sys.exit("Please use a locale setting which supports utf-8.\nPython can't change the filesystem locale after loading so we need a utf-8 when python starts or things won't work.")
+ sys.exit("Please use a locale setting which supports UTF-8 (such as LANG=en_US.UTF-8).\nPython can't change the filesystem locale after loading so we need a UTF-8 when Python starts or things won't work.")
-__version__ = "1.34.0"
+__version__ = "1.36.0"
if __name__ == "__main__":
if __version__ != bb.__version__:
diff --git a/import-layers/yocto-poky/bitbake/bin/bitbake-diffsigs b/import-layers/yocto-poky/bitbake/bin/bitbake-diffsigs
index eb2f859..4e6bbdd 100755
--- a/import-layers/yocto-poky/bitbake/bin/bitbake-diffsigs
+++ b/import-layers/yocto-poky/bitbake/bin/bitbake-diffsigs
@@ -34,18 +34,39 @@
logger = bb.msg.logger_create('bitbake-diffsigs')
+def find_siginfo(tinfoil, pn, taskname, sigs=None):
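+ """Find signature data files for the given PN/task by running the server's
+ findSigInfo command and collecting the bb.event.FindSigInfoResult event."""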
+ result = None
+ tinfoil.set_event_mask(['bb.event.FindSigInfoResult',
+ 'logging.LogRecord',
+ 'bb.command.CommandCompleted',
+ 'bb.command.CommandFailed'])
+ ret = tinfoil.run_command('findSigInfo', pn, taskname, sigs)
+ if ret:
+ while True:
+ event = tinfoil.wait_event(1)
+ if event:
+ if isinstance(event, bb.command.CommandCompleted):
+ break
+ elif isinstance(event, bb.command.CommandFailed):
+ logger.error(str(event))
+ sys.exit(2)
+ elif isinstance(event, bb.event.FindSigInfoResult):
+ result = event.result
+ elif isinstance(event, logging.LogRecord):
+ logger.handle(event)
+ else:
+ logger.error('No result returned from findSigInfo command')
+ sys.exit(2)
+ return result
+
def find_compare_task(bbhandler, pn, taskname, sig1=None, sig2=None, color=False):
""" Find the most recent signature files for the specified PN/task and compare them """
- if not hasattr(bb.siggen, 'find_siginfo'):
- logger.error('Metadata does not support finding signature data files')
- sys.exit(1)
-
if not taskname.startswith('do_'):
taskname = 'do_%s' % taskname
if sig1 and sig2:
- sigfiles = bb.siggen.find_siginfo(pn, taskname, [sig1, sig2], bbhandler.config_data)
+ sigfiles = find_siginfo(bbhandler, pn, taskname, [sig1, sig2])
if len(sigfiles) == 0:
logger.error('No sigdata files found matching %s %s matching either %s or %s' % (pn, taskname, sig1, sig2))
sys.exit(1)
@@ -57,7 +78,7 @@
sys.exit(1)
latestfiles = [sigfiles[sig1], sigfiles[sig2]]
else:
- filedates = bb.siggen.find_siginfo(pn, taskname, None, bbhandler.config_data)
+ filedates = find_siginfo(bbhandler, pn, taskname)
latestfiles = sorted(filedates.keys(), key=lambda f: filedates[f])[-3:]
if not latestfiles:
logger.error('No sigdata files found matching %s %s' % (pn, taskname))
@@ -69,7 +90,7 @@
# Define recursion callback
def recursecb(key, hash1, hash2):
hashes = [hash1, hash2]
- hashfiles = bb.siggen.find_siginfo(key, None, hashes, bbhandler.config_data)
+ hashfiles = find_siginfo(bbhandler, key, None, hashes)
recout = []
if len(hashfiles) == 0:
diff --git a/import-layers/yocto-poky/bitbake/bin/bitbake-layers b/import-layers/yocto-poky/bitbake/bin/bitbake-layers
index 2b05d28..d184011 100755
--- a/import-layers/yocto-poky/bitbake/bin/bitbake-layers
+++ b/import-layers/yocto-poky/bitbake/bin/bitbake-layers
@@ -43,6 +43,7 @@
add_help=False)
parser.add_argument('-d', '--debug', help='Enable debug output', action='store_true')
parser.add_argument('-q', '--quiet', help='Print only errors', action='store_true')
+ parser.add_argument('-F', '--force', help='Force add without recipe parse verification', action='store_true')
parser.add_argument('--color', choices=['auto', 'always', 'never'], default='auto', help='Colorize output (where %(metavar)s is %(choices)s)', metavar='COLOR')
global_args, unparsed_args = parser.parse_known_args()
@@ -89,7 +90,7 @@
if getattr(args, 'parserecipes', False):
tinfoil.config_data.disableTracking()
- tinfoil.parseRecipes()
+ tinfoil.parse_recipes()
tinfoil.config_data.enableTracking()
return args.func(args)
diff --git a/import-layers/yocto-poky/bitbake/bin/bitbake-selftest b/import-layers/yocto-poky/bitbake/bin/bitbake-selftest
index 380e003..afe1603 100755
--- a/import-layers/yocto-poky/bitbake/bin/bitbake-selftest
+++ b/import-layers/yocto-poky/bitbake/bin/bitbake-selftest
@@ -28,6 +28,7 @@
tests = ["bb.tests.codeparser",
"bb.tests.cow",
"bb.tests.data",
+ "bb.tests.event",
"bb.tests.fetch",
"bb.tests.parse",
"bb.tests.utils"]
diff --git a/import-layers/yocto-poky/bitbake/bin/bitbake-worker b/import-layers/yocto-poky/bitbake/bin/bitbake-worker
index ee2d622..e925054 100755
--- a/import-layers/yocto-poky/bitbake/bin/bitbake-worker
+++ b/import-layers/yocto-poky/bitbake/bin/bitbake-worker
@@ -17,7 +17,7 @@
from threading import Thread
if sys.getfilesystemencoding() != "utf-8":
- sys.exit("Please use a locale setting which supports utf-8.\nPython can't change the filesystem locale after loading so we need a utf-8 when python starts or things won't work.")
+ sys.exit("Please use a locale setting which supports UTF-8 (such as LANG=en_US.UTF-8).\nPython can't change the filesystem locale after loading so we need a UTF-8 when Python starts or things won't work.")
# Users shouldn't be running this code directly
if len(sys.argv) != 2 or not sys.argv[1].startswith("decafbad"):
@@ -499,4 +499,3 @@
workerlog_write("exitting")
sys.exit(0)
-
diff --git a/import-layers/yocto-poky/bitbake/bin/git-make-shallow b/import-layers/yocto-poky/bitbake/bin/git-make-shallow
new file mode 100755
index 0000000..296d3a3
--- /dev/null
+++ b/import-layers/yocto-poky/bitbake/bin/git-make-shallow
@@ -0,0 +1,165 @@
+#!/usr/bin/env python3
+"""git-make-shallow: make the current git repository shallow
+
+Remove the history of the specified revisions, then optionally filter the
+available refs to those specified.
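+
+Example (a sketch; the ref and revision below are placeholders):
+    ./bin/git-make-shallow --shrink --ref refs/heads/master HEAD~20
+makes HEAD~20 the shallow boundary, keeps only refs/heads/master, and repacks
+the repository to reclaim space.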
+"""
+
+import argparse
+import collections
+import errno
+import itertools
+import os
+import subprocess
+import sys
+
+version = 1.0
+
+
+def main():
+ if sys.version_info < (3, 4, 0):
+ sys.exit('Python 3.4 or greater is required')
+
+ git_dir = check_output(['git', 'rev-parse', '--git-dir']).rstrip()
+ shallow_file = os.path.join(git_dir, 'shallow')
+ if os.path.exists(shallow_file):
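+ # repository is already shallow: unshallow it first, or drop a stale shallow file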
+ try:
+ check_output(['git', 'fetch', '--unshallow'])
+ except subprocess.CalledProcessError:
+ try:
+ os.unlink(shallow_file)
+ except OSError as exc:
+ if exc.errno != errno.ENOENT:
+ raise
+
+ args = process_args()
+ revs = check_output(['git', 'rev-list'] + args.revisions).splitlines()
+
+ make_shallow(shallow_file, args.revisions, args.refs)
+
+ ref_revs = check_output(['git', 'rev-list'] + args.refs).splitlines()
+ remaining_history = set(revs) & set(ref_revs)
+ for rev in remaining_history:
+ if check_output(['git', 'rev-parse', '{}^@'.format(rev)]):
+ sys.exit('Error: %s was not made shallow' % rev)
+
+ filter_refs(args.refs)
+
+ if args.shrink:
+ shrink_repo(git_dir)
+ subprocess.check_call(['git', 'fsck', '--unreachable'])
+
+
+def process_args():
+ # TODO: add argument to automatically keep local-only refs, since they
+ # can't be easily restored with a git fetch.
+ parser = argparse.ArgumentParser(description='Remove the history of the specified revisions, then optionally filter the available refs to those specified.')
+ parser.add_argument('--ref', '-r', metavar='REF', action='append', dest='refs', help='remove all but the specified refs (cumulative)')
+ parser.add_argument('--shrink', '-s', action='store_true', help='shrink the git repository by repacking and pruning')
+ parser.add_argument('revisions', metavar='REVISION', nargs='+', help='a git revision/commit')
+ if len(sys.argv) < 2:
+ parser.print_help()
+ sys.exit(2)
+
+ args = parser.parse_args()
+
+ if args.refs:
+ args.refs = check_output(['git', 'rev-parse', '--symbolic-full-name'] + args.refs).splitlines()
+ else:
+ args.refs = get_all_refs(lambda r, t, tt: t == 'commit' or tt == 'commit')
+
+ args.refs = list(filter(lambda r: not r.endswith('/HEAD'), args.refs))
+ args.revisions = check_output(['git', 'rev-parse'] + ['%s^{}' % i for i in args.revisions]).splitlines()
+ return args
+
+
+def check_output(cmd, input=None):
+ return subprocess.check_output(cmd, universal_newlines=True, input=input)
+
+
+def make_shallow(shallow_file, revisions, refs):
+ """Remove the history of the specified revisions."""
+ for rev in follow_history_intersections(revisions, refs):
+ print("Processing %s" % rev)
+ with open(shallow_file, 'a') as f:
+ f.write(rev + '\n')
+
+
+def get_all_refs(ref_filter=None):
+ """Return all the existing refs in this repository, optionally filtering the refs."""
+ ref_output = check_output(['git', 'for-each-ref', '--format=%(refname)\t%(objecttype)\t%(*objecttype)'])
+ ref_split = [tuple(iter_extend(l.rsplit('\t'), 3)) for l in ref_output.splitlines()]
+ if ref_filter:
+ ref_split = (e for e in ref_split if ref_filter(*e))
+ refs = [r[0] for r in ref_split]
+ return refs
+
+
+def iter_extend(iterable, length, obj=None):
+ """Ensure that iterable is the specified length by extending with obj."""
+ return itertools.islice(itertools.chain(iterable, itertools.repeat(obj)), length)
+
+
+def filter_refs(refs):
+ """Remove all but the specified refs from the git repository."""
+ all_refs = get_all_refs()
+ to_remove = set(all_refs) - set(refs)
+ if to_remove:
+ check_output(['xargs', '-0', '-n', '1', 'git', 'update-ref', '-d', '--no-deref'],
+ input=''.join(l + '\0' for l in to_remove))
+
+
+def follow_history_intersections(revisions, refs):
+ """Determine all the points where the history of the specified revisions intersects the specified refs."""
+ queue = collections.deque(revisions)
+ seen = set()
+
+ for rev in iter_except(queue.popleft, IndexError):
+ if rev in seen:
+ continue
+
+ parents = check_output(['git', 'rev-parse', '%s^@' % rev]).splitlines()
+
+ yield rev
+ seen.add(rev)
+
+ if not parents:
+ continue
+
+ check_refs = check_output(['git', 'merge-base', '--independent'] + sorted(refs)).splitlines()
+ for parent in parents:
+ for ref in check_refs:
+ print("Checking %s vs %s" % (parent, ref))
+ try:
+ merge_base = check_output(['git', 'merge-base', parent, ref]).rstrip()
+ except subprocess.CalledProcessError:
+ continue
+ else:
+ queue.append(merge_base)
+
+
+def iter_except(func, exception, start=None):
+ """Yield a function repeatedly until it raises an exception."""
+ try:
+ if start is not None:
+ yield start()
+ while True:
+ yield func()
+ except exception:
+ pass
+
+
+def shrink_repo(git_dir):
+ """Shrink the newly shallow repository, removing the unreachable objects."""
+ subprocess.check_call(['git', 'reflog', 'expire', '--expire-unreachable=now', '--all'])
+ subprocess.check_call(['git', 'repack', '-ad'])
+ try:
+ os.unlink(os.path.join(git_dir, 'objects', 'info', 'alternates'))
+ except OSError as exc:
+ if exc.errno != errno.ENOENT:
+ raise
+ subprocess.check_call(['git', 'prune', '--expire', 'now'])
+
+
+if __name__ == '__main__':
+ main()
diff --git a/import-layers/yocto-poky/bitbake/bin/toaster b/import-layers/yocto-poky/bitbake/bin/toaster
index 61a4a0f..4036f0a 100755
--- a/import-layers/yocto-poky/bitbake/bin/toaster
+++ b/import-layers/yocto-poky/bitbake/bin/toaster
@@ -18,12 +18,21 @@
# along with this program. If not, see http://www.gnu.org/licenses/.
HELP="
-Usage: source toaster start|stop [webport=<address:port>] [noweb]
+Usage: source toaster start|stop [webport=<address:port>] [noweb] [nobuild]
Optional arguments:
- [noweb] Setup the environment for building with toaster but don't start the development server
+ [nobuild] Set up the environment for capturing builds with toaster but disable managed builds
+ [noweb] Set up the environment for capturing builds with toaster but don't start the web server
[webport] Set the development server (default: localhost:8000)
"
+custom_extention()
+{
+ custom_extension=$BBBASEDIR/lib/toaster/orm/fixtures/custom_toaster_append.sh
+ if [ -f $custom_extension ] ; then
+ $custom_extension $*
+ fi
+}
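+
+# A hypothetical example of $BBBASEDIR/lib/toaster/orm/fixtures/custom_toaster_append.sh
+# (not shipped with toaster); the hook name is passed as $1 and, for the web start
+# hook, the address:port as $2:
+#
+#   #!/bin/sh
+#   case $1 in
+#       web_start_postpend ) echo "Toaster UI is listening on $2" ;;
+#   esac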
+
databaseCheck()
{
retval=0
@@ -50,6 +59,11 @@
webserverKillAll()
{
local pidfile
+ if [ -f ${BUILDDIR}/.toastermain.pid ] ; then
+ custom_extention web_stop_postpend
+ else
+ custom_extention noweb_stop_postpend
+ fi
for pidfile in ${BUILDDIR}/.toastermain.pid ${BUILDDIR}/.runbuilds.pid; do
if [ -f ${pidfile} ]; then
pid=`cat ${pidfile}`
@@ -89,6 +103,7 @@
else
echo "Toaster development webserver started at http://$ADDR_PORT"
echo -e "\nYou can now run 'bitbake <target>' on the command line and monitor your build in Toaster.\nYou can also use a Toaster project to configure and run a build.\n"
+ custom_extention web_start_postpend $ADDR_PORT
fi
return $retval
@@ -116,8 +131,14 @@
# Verify Django version
reqfile=$(python3 -c "import os; print(os.path.realpath('$BBBASEDIR/toaster-requirements.txt'))")
exp='s/Django\([><=]\+\)\([^,]\+\),\([><=]\+\)\(.\+\)/'
- exp=$exp'import sys,django;version=django.get_version().split(".");'
- exp=$exp'sys.exit(not (version \1 "\2".split(".") and version \3 "\4".split(".")))/p'
+ # expand version parts to 2 digits to support 1.10.x > 1.8
+ # (note:helper functions hard to insert in-line)
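+ # e.g. Django>=1.8,<1.12 with django 1.11.3 installed gives
+ #   version=["01","11","03"], vmin=["01","08"], vmax=["01","12"],
+ # so the element-wise string comparison orders the versions correctly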
+ exp=$exp'import sys,django;'
+ exp=$exp'version=["%02d" % int(n) for n in django.get_version().split(".")];'
+ exp=$exp'vmin=["%02d" % int(n) for n in "\2".split(".")];'
+ exp=$exp'vmax=["%02d" % int(n) for n in "\4".split(".")];'
+ exp=$exp'sys.exit(not (version \1 vmin and version \3 vmax))'
+ exp=$exp'/p'
if ! sed -n "$exp" $reqfile | python3 - ; then
req=`grep ^Django $reqfile`
echo "This program needs $req"
@@ -162,8 +183,8 @@
unset OE_ROOT
-
WEBSERVER=1
+export TOASTER_BUILDSERVER=1
ADDR_PORT="localhost:8000"
unset CMD
for param in $*; do
@@ -171,6 +192,9 @@
noweb )
WEBSERVER=0
;;
+ nobuild )
+ TOASTER_BUILDSERVER=0
+ ;;
start )
CMD=$param
;;
@@ -235,6 +259,7 @@
echo "The system will $CMD."
# Execute the commands
+custom_extention toaster_prepend $CMD $ADDR_PORT
case $CMD in
start )
@@ -256,22 +281,28 @@
if [ ! -f "$TOASTER_DIR/toaster.sqlite" ] ; then
if ! databaseCheck; then
echo "Failed ${CMD}."
- return 4
+ return 4
fi
fi
+ custom_extention noweb_start_postpend $ADDR_PORT
fi
if [ $WEBSERVER -gt 0 ] && ! webserverStartAll; then
echo "Failed ${CMD}."
return 4
fi
export BITBAKE_UI='toasterui'
- $MANAGE runbuilds \
- </dev/null >>${BUILDDIR}/toaster_runbuilds.log 2>&1 \
- & echo $! >${BUILDDIR}/.runbuilds.pid
+ if [ $TOASTER_BUILDSERVER -eq 1 ] ; then
+ $MANAGE runbuilds \
+ </dev/null >>${BUILDDIR}/toaster_runbuilds.log 2>&1 \
+ & echo $! >${BUILDDIR}/.runbuilds.pid
+ else
+ echo "Toaster build server not started."
+ fi
# set fail safe stop system on terminal exit
trap stop_system SIGHUP
echo "Successful ${CMD}."
+ custom_extention toaster_postpend $CMD $ADDR_PORT
return 0
;;
stop )
@@ -279,3 +310,5 @@
echo "Successful ${CMD}."
;;
esac
+custom_extention toaster_postpend $CMD $ADDR_PORT
+
diff --git a/import-layers/yocto-poky/bitbake/doc/bitbake-user-manual/bitbake-user-manual-fetching.xml b/import-layers/yocto-poky/bitbake/doc/bitbake-user-manual/bitbake-user-manual-fetching.xml
index d1ce43e..c721e86 100644
--- a/import-layers/yocto-poky/bitbake/doc/bitbake-user-manual/bitbake-user-manual-fetching.xml
+++ b/import-layers/yocto-poky/bitbake/doc/bitbake-user-manual/bitbake-user-manual-fetching.xml
@@ -620,7 +620,9 @@
The Git Submodules fetcher is not a complete fetcher
implementation.
The fetcher has known issues where it does not use the
- normal source mirroring infrastructure properly.
+ normal source mirroring infrastructure properly. Further,
+ the submodule sources it fetches are not visible to the
+ licensing and source archiving infrastructures.
</para>
</note>
</para>
diff --git a/import-layers/yocto-poky/bitbake/doc/bitbake-user-manual/bitbake-user-manual-hello.xml b/import-layers/yocto-poky/bitbake/doc/bitbake-user-manual/bitbake-user-manual-hello.xml
index 2685c0e..9253eaf 100644
--- a/import-layers/yocto-poky/bitbake/doc/bitbake-user-manual/bitbake-user-manual-hello.xml
+++ b/import-layers/yocto-poky/bitbake/doc/bitbake-user-manual/bitbake-user-manual-hello.xml
@@ -128,15 +128,8 @@
</para>
<note>
- This example was inspired by and drew heavily from these sources:
- <itemizedlist>
- <listitem><para>
- <ulink url="http://www.mail-archive.com/yocto@yoctoproject.org/msg09379.html">Mailing List post - The BitBake equivalent of "Hello, World!"</ulink>
- </para></listitem>
- <listitem><para>
- <ulink url="https://web.archive.org/web/20150325165911/http://hambedded.org/blog/2012/11/24/from-bitbake-hello-world-to-an-image/">Hambedded Linux blog post - From Bitbake Hello World to an Image</ulink>
- </para></listitem>
- </itemizedlist>
+ This example was inspired by and drew heavily from
+ <ulink url="http://www.mail-archive.com/yocto@yoctoproject.org/msg09379.html">Mailing List post - The BitBake equivalent of "Hello, World!"</ulink>.
</note>
<para>
@@ -269,7 +262,7 @@
and define some key BitBake variables.
For more information on the <filename>bitbake.conf</filename>,
see
- <ulink url='https://web.archive.org/web/20150325165911/http://hambedded.org/blog/2012/11/24/from-bitbake-hello-world-to-an-image/#an-overview-of-bitbakeconf'></ulink>
+ <ulink url='http://git.openembedded.org/bitbake/tree/conf/bitbake.conf'></ulink>.
</para>
<para>Use the following commands to create the <filename>conf</filename>
directory in the project directory:
@@ -352,9 +345,6 @@
Of course, the <filename>base.bbclass</filename> can have much
more depending on which build environments BitBake is
supporting.
- For more information on the <filename>base.bbclass</filename> file,
- you can look at
- <ulink url='https://web.archive.org/web/20150325165911/http://hambedded.org/blog/2012/11/24/from-bitbake-hello-world-to-an-image/#tasks'></ulink>.
</para></listitem>
<listitem><para><emphasis>Run Bitbake:</emphasis>
After making sure that the <filename>classes/base.bbclass</filename>
@@ -375,8 +365,8 @@
code separate from the general metadata used by BitBake.
Thus, this example creates and uses a layer called "mylayer".
<note>
- You can find additional information on adding a layer at
- <ulink url='https://web.archive.org/web/20150325165911/http://hambedded.org/blog/2012/11/24/from-bitbake-hello-world-to-an-image/#adding-an-example-layer'></ulink>.
+ You can find additional information on layers at
+ <ulink url='http://www.yoctoproject.org/docs/2.3/bitbake-user-manual/bitbake-user-manual.html#layers'></ulink>.
</note>
</para>
<para>Minimally, you need a recipe file and a layer configuration
diff --git a/import-layers/yocto-poky/bitbake/doc/bitbake-user-manual/bitbake-user-manual-intro.xml b/import-layers/yocto-poky/bitbake/doc/bitbake-user-manual/bitbake-user-manual-intro.xml
index ca7f724..08d9afd 100644
--- a/import-layers/yocto-poky/bitbake/doc/bitbake-user-manual/bitbake-user-manual-intro.xml
+++ b/import-layers/yocto-poky/bitbake/doc/bitbake-user-manual/bitbake-user-manual-intro.xml
@@ -440,7 +440,7 @@
Build Checkout:</emphasis>
A final possibility for getting a copy of BitBake is that it
already comes with your checkout of a larger Bitbake-based build
- system, such as Poky or Yocto Project.
+ system, such as Poky.
Rather than manually checking out individual layers and
gluing them together yourself, you can check
out an entire build system.
diff --git a/import-layers/yocto-poky/bitbake/doc/bitbake-user-manual/bitbake-user-manual-metadata.xml b/import-layers/yocto-poky/bitbake/doc/bitbake-user-manual/bitbake-user-manual-metadata.xml
index 1d1e5b3..0cfa53d 100644
--- a/import-layers/yocto-poky/bitbake/doc/bitbake-user-manual/bitbake-user-manual-metadata.xml
+++ b/import-layers/yocto-poky/bitbake/doc/bitbake-user-manual/bitbake-user-manual-metadata.xml
@@ -669,7 +669,7 @@
<literallayout class='monospaced'>
DEPENDS = "glibc ncurses"
OVERRIDES = "machine:local"
- DEPENDS_append_machine = "libmad"
+ DEPENDS_append_machine = " libmad"
</literallayout>
In this example, <filename>DEPENDS</filename> becomes
"glibc ncurses libmad".
@@ -899,11 +899,12 @@
<para>
The <filename>inherit</filename> directive is a rudimentary
- means of specifying what classes of functionality your
- recipes require.
+ means of specifying functionality contained in class files
+ that your recipes require.
For example, you can easily abstract out the tasks involved in
building a package that uses Autoconf and Automake and put
- those tasks into a class file that can be used by your recipe.
+ those tasks into a class file and then have your recipe
+ inherit that class file.
</para>
<para>
@@ -922,13 +923,24 @@
inherited class within your recipe by doing so
after the "inherit" statement.
</note>
+ If you want to use the directive to inherit
+ multiple classes, separate them with spaces.
+ The following example shows how to inherit both the
+ <filename>buildhistory</filename> and <filename>rm_work</filename>
+ classes:
+ <literallayout class='monospaced'>
+ inherit buildhistory rm_work
+ </literallayout>
</para>
<para>
- If necessary, it is possible to inherit a class
- conditionally by using
- a variable expression after the <filename>inherit</filename>
- statement.
+ An advantage with the inherit directive as compared to both
+ the
+ <link linkend='include-directive'>include</link> and
+ <link linkend='require-inclusion'>require</link> directives
+ is that you can inherit class files conditionally.
+ You can accomplish this by using a variable expression
+ after the <filename>inherit</filename> statement.
Here is an example:
<literallayout class='monospaced'>
inherit ${VARNAME}
@@ -985,6 +997,17 @@
</para>
<para>
+ The include directive is a more generic method of including
+ functionality as compared to the
+ <link linkend='inherit-directive'>inherit</link> directive,
+ which is restricted to class (i.e. <filename>.bbclass</filename>)
+ files.
+ The include directive is applicable for any other kind of
+ shared or encapsulated functionality or configuration that
+ does not suit a <filename>.bbclass</filename> file.
+ </para>
+
+ <para>
As an example, suppose you needed a recipe to include some
self-test definitions:
<literallayout class='monospaced'>
@@ -1018,6 +1041,18 @@
</para>
<para>
+ The require directive, like the include directive previously
+ described, is a more generic method of including
+ functionality as compared to the
+ <link linkend='inherit-directive'>inherit</link> directive,
+ which is restricted to class (i.e. <filename>.bbclass</filename>)
+ files.
+ The require directive is applicable for any other kind of
+ shared or encapsulated functionality or configuration that
+ does not suit a <filename>.bbclass</filename> file.
+ </para>
+
+ <para>
Similar to how BitBake handles
<link linkend='include-directive'><filename>include</filename></link>,
if the path specified
@@ -1049,8 +1084,9 @@
<para>
When creating a configuration file (<filename>.conf</filename>),
- you can use the <filename>INHERIT</filename> directive to
- inherit a class.
+ you can use the
+ <link linkend='var-INHERIT'><filename>INHERIT</filename></link>
+ configuration directive to inherit a class.
BitBake only supports this directive when used within
a configuration file.
</para>
@@ -1083,7 +1119,7 @@
<filename>autotools</filename> and <filename>pkgconfig</filename>
classes:
<literallayout class='monospaced'>
- inherit autotools pkgconfig
+ INHERIT += "autotools pkgconfig"
</literallayout>
</para>
</section>
@@ -2029,7 +2065,7 @@
before any tasks are executed so would be in the global
configuration datastore namespace.
No recipe-specific metadata exists in that namespace.
- The "BuildStarted" and "buildCompleted" events also run in
+ The "BuildStarted" and "BuildCompleted" events also run in
the main cooker/server process rather than any worker context.
Thus, any changes made to the datastore would be seen by other
cooker/server events within the current build but not seen
diff --git a/import-layers/yocto-poky/bitbake/doc/bitbake-user-manual/bitbake-user-manual-ref-variables.xml b/import-layers/yocto-poky/bitbake/doc/bitbake-user-manual/bitbake-user-manual-ref-variables.xml
index 0e89bf2..d89e123 100644
--- a/import-layers/yocto-poky/bitbake/doc/bitbake-user-manual/bitbake-user-manual-ref-variables.xml
+++ b/import-layers/yocto-poky/bitbake/doc/bitbake-user-manual/bitbake-user-manual-ref-variables.xml
@@ -1143,8 +1143,6 @@
<glossdef>
<para>
Sets the base location where layers are stored.
- By default, this location is set to
- <filename>${COREBASE}</filename>.
This setting is used in conjunction with
<filename>bitbake-layers layerindex-fetch</filename> and
tells <filename>bitbake-layers</filename> where to place
@@ -1596,9 +1594,19 @@
<glossentry id='var-INHERIT'><glossterm>INHERIT</glossterm>
<glossdef>
<para>
- Causes the named class to be inherited at
- this point during parsing.
- The variable is only valid in configuration files.
+ Causes the named class or classes to be inherited globally.
+ Anonymous functions in the class or classes
+ are not executed for the
+ base configuration and in each individual recipe.
+ The OpenEmbedded build system ignores changes to
+ <filename>INHERIT</filename> in individual recipes.
+ </para>
+
+ <para>
+ For more information on <filename>INHERIT</filename>, see
+ the
+ "<link linkend="inherit-configuration-directive"><filename>INHERIT</filename> Configuration Directive</link>"
+ section.
</para>
</glossdef>
</glossentry>
@@ -1893,7 +1901,7 @@
Here are two examples:
<literallayout class='monospaced'>
PREFERRED_VERSION_python = "2.7.3"
- PREFERRED_VERSION_linux-yocto = "3.10%"
+ PREFERRED_VERSION_linux-yocto = "4.12%"
</literallayout>
</para>
</glossdef>
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/COW.py b/import-layers/yocto-poky/bitbake/lib/bb/COW.py
index 36ebbd9..bec6208 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/COW.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/COW.py
@@ -3,7 +3,7 @@
#
# This is a copy on write dictionary and set which abuses classes to try and be nice and fast.
#
-# Copyright (C) 2006 Tim Amsell
+# Copyright (C) 2006 Tim Ansell
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/__init__.py b/import-layers/yocto-poky/bitbake/lib/bb/__init__.py
index bfe0ca5..5268831 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/__init__.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/__init__.py
@@ -21,7 +21,7 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
-__version__ = "1.34.0"
+__version__ = "1.36.0"
import sys
if sys.version_info < (3, 4, 0):
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/cache.py b/import-layers/yocto-poky/bitbake/lib/bb/cache.py
index e7eeb4f..86ce0e7 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/cache.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/cache.py
@@ -86,9 +86,9 @@
class CoreRecipeInfo(RecipeInfoCommon):
__slots__ = ()
- cachefile = "bb_cache.dat"
+ cachefile = "bb_cache.dat"
- def __init__(self, filename, metadata):
+ def __init__(self, filename, metadata):
self.file_depends = metadata.getVar('__depends', False)
self.timestamp = bb.parse.cached_mtime(filename)
self.variants = self.listvar('__VARIANTS', metadata) + ['']
@@ -107,7 +107,7 @@
self.pn = self.getvar('PN', metadata)
self.packages = self.listvar('PACKAGES', metadata)
- if not self.pn in self.packages:
+ if not self.packages:
self.packages.append(self.pn)
self.basetaskhashes = self.taskvar('BB_BASEHASH', self.tasks, metadata)
@@ -122,7 +122,7 @@
self.defaultpref = self.intvar('DEFAULT_PREFERENCE', metadata)
self.not_world = self.getvar('EXCLUDE_FROM_WORLD', metadata)
self.stamp = self.getvar('STAMP', metadata)
- self.stampclean = self.getvar('STAMPCLEAN', metadata)
+ self.stampclean = self.getvar('STAMPCLEAN', metadata)
self.stamp_extrainfo = self.flaglist('stamp-extra-info', self.tasks, metadata)
self.file_checksums = self.flaglist('file-checksums', self.tasks, metadata, True)
self.packages_dynamic = self.listvar('PACKAGES_DYNAMIC', metadata)
@@ -217,7 +217,7 @@
cachedata.packages_dynamic[package].append(fn)
# Build hash of runtime depends and recommends
- for package in self.packages + [self.pn]:
+ for package in self.packages:
cachedata.rundeps[fn][package] = list(self.rdepends) + self.rdepends_pkg[package]
cachedata.runrecs[fn][package] = list(self.rrecommends) + self.rrecommends_pkg[package]
@@ -375,8 +375,8 @@
data = databuilder.data
# Pass caches_array information into Cache Constructor
- # It will be used later for deciding whether we
- # need extra cache file dump/load support
+ # It will be used later for deciding whether we
+ # need extra cache file dump/load support
self.caches_array = caches_array
self.cachedir = data.getVar("CACHE")
self.clean = set()
@@ -421,7 +421,7 @@
cachesize += os.fstat(cachefile.fileno()).st_size
bb.event.fire(bb.event.CacheLoadStarted(cachesize), self.data)
-
+
for cache_class in self.caches_array:
cachefile = getCacheFile(self.cachedir, cache_class.cachefile, self.data_hash)
with open(cachefile, "rb") as cachefile:
@@ -438,8 +438,8 @@
logger.info('Cache version mismatch, rebuilding...')
return
elif bitbake_ver != bb.__version__:
- logger.info('Bitbake version mismatch, rebuilding...')
- return
+ logger.info('Bitbake version mismatch, rebuilding...')
+ return
# Load the rest of the cache file
current_progress = 0
@@ -616,13 +616,13 @@
a = fl.find(":True")
b = fl.find(":False")
if ((a < 0) and b) or ((b > 0) and (b < a)):
- f = fl[:b+6]
- fl = fl[b+7:]
+ f = fl[:b+6]
+ fl = fl[b+7:]
elif ((b < 0) and a) or ((a > 0) and (a < b)):
- f = fl[:a+5]
- fl = fl[a+6:]
+ f = fl[:a+5]
+ fl = fl[a+6:]
else:
- break
+ break
fl = fl.strip()
if "*" in f:
continue
@@ -886,4 +886,3 @@
p.dump([data, self.__class__.CACHE_VERSION])
bb.utils.unlockfile(glf)
-
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/command.py b/import-layers/yocto-poky/bitbake/lib/bb/command.py
index a919f58..6c966e3 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/command.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/command.py
@@ -50,6 +50,8 @@
def __init__(self, message):
self.error = message
CommandExit.__init__(self, 1)
+ def __str__(self):
+ return "Command execution failed: %s" % self.error
class CommandError(Exception):
pass
@@ -76,7 +78,8 @@
if not hasattr(command_method, 'readonly') or False == getattr(command_method, 'readonly'):
return None, "Not able to execute not readonly commands in readonly mode"
try:
- if getattr(command_method, 'needconfig', False):
+ self.cooker.process_inotify_updates()
+ if getattr(command_method, 'needconfig', True):
self.cooker.updateCacheSync()
result = command_method(self, commandline)
except CommandError as exc:
@@ -96,6 +99,7 @@
def runAsyncCommand(self):
try:
+ self.cooker.process_inotify_updates()
if self.cooker.state in (bb.cooker.state.error, bb.cooker.state.shutdown, bb.cooker.state.forceshutdown):
# updateCache will trigger a shutdown of the parser
# and then raise BBHandledException triggering an exit
@@ -141,6 +145,9 @@
self.currentAsyncCommand = None
self.cooker.finishcommand()
+ def reset(self):
+ self.remotedatastores = bb.remotedata.RemoteDatastores(self.cooker)
+
def split_mc_pn(pn):
if pn.startswith("multiconfig:"):
_, mc, pn = pn.split(":", 2)
@@ -233,59 +240,15 @@
command.cooker.configuration.postfile = postfiles
setPrePostConfFiles.needconfig = False
- def getCpuCount(self, command, params):
- """
- Get the CPU count on the bitbake server
- """
- return bb.utils.cpu_count()
- getCpuCount.readonly = True
- getCpuCount.needconfig = False
-
def matchFile(self, command, params):
fMatch = params[0]
return command.cooker.matchFile(fMatch)
matchFile.needconfig = False
- def generateNewImage(self, command, params):
- image = params[0]
- base_image = params[1]
- package_queue = params[2]
- timestamp = params[3]
- description = params[4]
- return command.cooker.generateNewImage(image, base_image,
- package_queue, timestamp, description)
-
- def ensureDir(self, command, params):
- directory = params[0]
- bb.utils.mkdirhier(directory)
- ensureDir.needconfig = False
-
- def setVarFile(self, command, params):
- """
- Save a variable in a file; used for saving in a configuration file
- """
- var = params[0]
- val = params[1]
- default_file = params[2]
- op = params[3]
- command.cooker.modifyConfigurationVar(var, val, default_file, op)
- setVarFile.needconfig = False
-
- def removeVarFile(self, command, params):
- """
- Remove a variable declaration from a file
- """
- var = params[0]
- command.cooker.removeConfigurationVar(var)
- removeVarFile.needconfig = False
-
- def createConfigFile(self, command, params):
- """
- Create an extra configuration file
- """
- name = params[0]
- command.cooker.createConfigFile(name)
- createConfigFile.needconfig = False
+ def getUIHandlerNum(self, command, params):
+ return bb.event.get_uihandler()
+ getUIHandlerNum.needconfig = False
+ getUIHandlerNum.readonly = True
def setEventMask(self, command, params):
handlerNum = params[0]
@@ -323,6 +286,7 @@
parseConfiguration.needconfig = False
def getLayerPriorities(self, command, params):
+ command.cooker.parseConfiguration()
ret = []
# regex objects cannot be marshalled by xmlrpc
for collection, pattern, regex, pri in command.cooker.bbfile_config_priorities:
@@ -354,6 +318,38 @@
return command.cooker.recipecaches[mc].pkg_pepvpr
getRecipeVersions.readonly = True
+ def getRecipeProvides(self, command, params):
+ try:
+ mc = params[0]
+ except IndexError:
+ mc = ''
+ return command.cooker.recipecaches[mc].fn_provides
+ getRecipeProvides.readonly = True
+
+ def getRecipePackages(self, command, params):
+ try:
+ mc = params[0]
+ except IndexError:
+ mc = ''
+ return command.cooker.recipecaches[mc].packages
+ getRecipePackages.readonly = True
+
+ def getRecipePackagesDynamic(self, command, params):
+ try:
+ mc = params[0]
+ except IndexError:
+ mc = ''
+ return command.cooker.recipecaches[mc].packages_dynamic
+ getRecipePackagesDynamic.readonly = True
+
+ def getRProviders(self, command, params):
+ try:
+ mc = params[0]
+ except IndexError:
+ mc = ''
+ return command.cooker.recipecaches[mc].rproviders
+ getRProviders.readonly = True
+
def getRuntimeDepends(self, command, params):
ret = []
try:
@@ -592,11 +588,14 @@
bfile = params[0]
task = params[1]
if len(params) > 2:
- hidewarning = params[2]
+ internal = params[2]
else:
- hidewarning = False
+ internal = False
- command.cooker.buildFile(bfile, task, hidewarning)
+ if internal:
+ command.cooker.buildFileInternal(bfile, task, fireevents=False, quietlog=True)
+ else:
+ command.cooker.buildFile(bfile, task)
buildFile.needcache = False
def buildTargets(self, command, params):
@@ -646,17 +645,6 @@
command.finishAsyncCommand()
generateTargetsTree.needcache = True
- def findCoreBaseFiles(self, command, params):
- """
- Find certain files in COREBASE directory. i.e. Layers
- """
- subdir = params[0]
- filename = params[1]
-
- command.cooker.findCoreBaseFiles(subdir, filename)
- command.finishAsyncCommand()
- findCoreBaseFiles.needcache = False
-
def findConfigFiles(self, command, params):
"""
Find config files which provide appropriate values
@@ -764,3 +752,14 @@
command.finishAsyncCommand()
clientComplete.needcache = False
+ def findSigInfo(self, command, params):
+ """
+ Find signature info files via the signature generator
+ """
+ pn = params[0]
+ taskname = params[1]
+ sigs = params[2]
+ res = bb.siggen.find_siginfo(pn, taskname, sigs, command.cooker.data)
+ bb.event.fire(bb.event.FindSigInfoResult(res), command.cooker.data)
+ command.finishAsyncCommand()
+ findSigInfo.needcache = False
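
Note: the commands dropped above (getCpuCount, generateNewImage, setVarFile and friends) were old Hob-era entry points ("#added by hob" in the cooker code they drove), while the additions follow the existing CommandsSync conventions: each method receives (command, params), returns plain marshallable data, and is tagged with attributes such as readonly/needconfig. A UI front-end would reach the new read-only recipe queries over the usual command channel; a minimal sketch, assuming the standard (result, error) return convention of runCommand (dump_providers itself is illustrative, not part of the patch):

    def dump_providers(server, mc=''):
        # 'getRecipeProvides' is one of the read-only commands added above;
        # fn_provides maps a recipe filename to the names it PROVIDES.
        provides, error = server.runCommand(["getRecipeProvides", mc])
        if error:
            raise Exception("getRecipeProvides failed: %s" % error)
        for fn, provided in sorted(provides.items()):
            print("%s -> %s" % (fn, " ".join(provided)))
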
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/cooker.py b/import-layers/yocto-poky/bitbake/lib/bb/cooker.py
index 3c9e88c..c7fdd72 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/cooker.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/cooker.py
@@ -181,15 +181,15 @@
self.confignotifier = pyinotify.Notifier(self.configwatcher, self.config_notifications)
self.watchmask = pyinotify.IN_CLOSE_WRITE | pyinotify.IN_CREATE | pyinotify.IN_DELETE | \
pyinotify.IN_DELETE_SELF | pyinotify.IN_MODIFY | pyinotify.IN_MOVE_SELF | \
- pyinotify.IN_MOVED_FROM | pyinotify.IN_MOVED_TO
+ pyinotify.IN_MOVED_FROM | pyinotify.IN_MOVED_TO
self.watcher = pyinotify.WatchManager()
self.watcher.bbseen = []
self.watcher.bbwatchedfiles = []
self.notifier = pyinotify.Notifier(self.watcher, self.notifications)
- # If being called by something like tinfoil, we need to clean cached data
+ # If being called by something like tinfoil, we need to clean cached data
# which may now be invalid
- bb.parse.__mtime_cache = {}
+ bb.parse.clear_cache()
bb.parse.BBHandler.cached_statements = {}
self.ui_cmdline = None
@@ -205,31 +205,11 @@
self.inotify_modified_files = []
- def _process_inotify_updates(server, notifier_list, abort):
- for n in notifier_list:
- if n.check_events(timeout=0):
- # read notified events and enqeue them
- n.read_events()
- n.process_events()
+ def _process_inotify_updates(server, cooker, abort):
+ cooker.process_inotify_updates()
return 1.0
- self.configuration.server_register_idlecallback(_process_inotify_updates, [self.confignotifier, self.notifier])
-
- self.baseconfig_valid = True
- self.parsecache_valid = False
-
- # Take a lock so only one copy of bitbake can run against a given build
- # directory at a time
- if not self.lockBitbake():
- bb.fatal("Only one copy of bitbake should be run against a build directory")
- try:
- self.lock.seek(0)
- self.lock.truncate()
- if len(configuration.interface) >= 2:
- self.lock.write("%s:%s\n" % (configuration.interface[0], configuration.interface[1]));
- self.lock.flush()
- except:
- pass
+ self.configuration.server_register_idlecallback(_process_inotify_updates, self)
# TOSTOP must not be set or our children will hang when they output
try:
@@ -253,10 +233,19 @@
# Let SIGHUP exit as SIGTERM
signal.signal(signal.SIGHUP, self.sigterm_exception)
+ def process_inotify_updates(self):
+ for n in [self.confignotifier, self.notifier]:
+ if n.check_events(timeout=0):
+ # read notified events and enqueue them
+ n.read_events()
+ n.process_events()
+
def config_notifications(self, event):
if event.maskname == "IN_Q_OVERFLOW":
bb.warn("inotify event queue overflowed, invalidating caches.")
+ self.parsecache_valid = False
self.baseconfig_valid = False
+ bb.parse.clear_cache()
return
if not event.pathname in self.configwatcher.bbwatchedfiles:
return
@@ -268,6 +257,10 @@
if event.maskname == "IN_Q_OVERFLOW":
bb.warn("inotify event queue overflowed, invalidating caches.")
self.parsecache_valid = False
+ bb.parse.clear_cache()
+ return
+ if event.pathname.endswith("bitbake-cookerdaemon.log") \
+ or event.pathname.endswith("bitbake.lock"):
return
if not event.pathname in self.inotify_modified_files:
self.inotify_modified_files.append(event.pathname)
@@ -288,7 +281,7 @@
watchtarget = None
while True:
# We try and add watches for files that don't exist but if they did, would influence
- # the parser. The parent directory of these files may not exist, in which case we need
+ # the parser. The parent directory of these files may not exist, in which case we need
# to watch any parent that does exist for changes.
try:
watcher.add_watch(f, self.watchmask, quiet=False)
@@ -382,6 +375,15 @@
self.data.renameVar("__depends", "__base_depends")
self.add_filewatch(self.data.getVar("__base_depends", False), self.configwatcher)
+ self.baseconfig_valid = True
+ self.parsecache_valid = False
+
+ def handlePRServ(self):
+ # Setup a PR Server based on the new configuration
+ try:
+ self.prhost = prserv.serv.auto_start(self.data)
+ except prserv.serv.PRServiceConfigError as e:
+ bb.fatal("Unable to start PR Server, exitting")
def enableDataTracking(self):
self.configuration.tracking = True
@@ -393,138 +395,6 @@
if hasattr(self, "data"):
self.data.disableTracking()
- def modifyConfigurationVar(self, var, val, default_file, op):
- if op == "append":
- self.appendConfigurationVar(var, val, default_file)
- elif op == "set":
- self.saveConfigurationVar(var, val, default_file, "=")
- elif op == "earlyAssign":
- self.saveConfigurationVar(var, val, default_file, "?=")
-
-
- def appendConfigurationVar(self, var, val, default_file):
- #add append var operation to the end of default_file
- default_file = bb.cookerdata.findConfigFile(default_file, self.data)
-
- total = "#added by hob"
- total += "\n%s += \"%s\"\n" % (var, val)
-
- with open(default_file, 'a') as f:
- f.write(total)
-
- #add to history
- loginfo = {"op":"append", "file":default_file, "line":total.count("\n")}
- self.data.appendVar(var, val, **loginfo)
-
- def saveConfigurationVar(self, var, val, default_file, op):
-
- replaced = False
- #do not save if nothing changed
- if str(val) == self.data.getVar(var, False):
- return
-
- conf_files = self.data.varhistory.get_variable_files(var)
-
- #format the value when it is a list
- if isinstance(val, list):
- listval = ""
- for value in val:
- listval += "%s " % value
- val = listval
-
- topdir = self.data.getVar("TOPDIR", False)
-
- #comment or replace operations made on var
- for conf_file in conf_files:
- if topdir in conf_file:
- with open(conf_file, 'r') as f:
- contents = f.readlines()
-
- lines = self.data.varhistory.get_variable_lines(var, conf_file)
- for line in lines:
- total = ""
- i = 0
- for c in contents:
- total += c
- i = i + 1
- if i==int(line):
- end_index = len(total)
- index = total.rfind(var, 0, end_index)
-
- begin_line = total.count("\n",0,index)
- end_line = int(line)
-
- #check if the variable was saved before in the same way
- #if true it replace the place where the variable was declared
- #else it comments it
- if contents[begin_line-1]== "#added by hob\n":
- contents[begin_line] = "%s %s \"%s\"\n" % (var, op, val)
- replaced = True
- else:
- for ii in range(begin_line, end_line):
- contents[ii] = "#" + contents[ii]
-
- with open(conf_file, 'w') as f:
- f.writelines(contents)
-
- if replaced == False:
- #remove var from history
- self.data.varhistory.del_var_history(var)
-
- #add var to the end of default_file
- default_file = bb.cookerdata.findConfigFile(default_file, self.data)
-
- #add the variable on a single line, to be easy to replace the second time
- total = "\n#added by hob"
- total += "\n%s %s \"%s\"\n" % (var, op, val)
-
- with open(default_file, 'a') as f:
- f.write(total)
-
- #add to history
- loginfo = {"op":"set", "file":default_file, "line":total.count("\n")}
- self.data.setVar(var, val, **loginfo)
-
- def removeConfigurationVar(self, var):
- conf_files = self.data.varhistory.get_variable_files(var)
- topdir = self.data.getVar("TOPDIR", False)
-
- for conf_file in conf_files:
- if topdir in conf_file:
- with open(conf_file, 'r') as f:
- contents = f.readlines()
-
- lines = self.data.varhistory.get_variable_lines(var, conf_file)
- for line in lines:
- total = ""
- i = 0
- for c in contents:
- total += c
- i = i + 1
- if i==int(line):
- end_index = len(total)
- index = total.rfind(var, 0, end_index)
-
- begin_line = total.count("\n",0,index)
-
- #check if the variable was saved before in the same way
- if contents[begin_line-1]== "#added by hob\n":
- contents[begin_line-1] = contents[begin_line] = "\n"
- else:
- contents[begin_line] = "\n"
- #remove var from history
- self.data.varhistory.del_var_history(var, conf_file, line)
- #remove variable
- self.data.delVar(var)
-
- with open(conf_file, 'w') as f:
- f.writelines(contents)
-
- def createConfigFile(self, name):
- path = os.getcwd()
- confpath = os.path.join(path, "conf", name)
- open(confpath, 'w').close()
-
def parseConfiguration(self):
# Set log file verbosity
verboselogs = bb.utils.to_boolean(self.data.getVar("BB_VERBOSE_LOGS", False))
@@ -547,21 +417,27 @@
self.handleCollections(self.data.getVar("BBFILE_COLLECTIONS"))
+ self.parsecache_valid = False
+
def updateConfigOpts(self, options, environment, cmdline):
self.ui_cmdline = cmdline
clean = True
for o in options:
if o in ['prefile', 'postfile']:
+ # Only these options may require a reparse
+ try:
+ if getattr(self.configuration, o) == options[o]:
+ # Value is the same, no need to mark dirty
+ continue
+ except AttributeError:
+ pass
+ logger.debug(1, "Marking as dirty due to '%s' option change to '%s'" % (o, options[o]))
+ print("Marking as dirty due to '%s' option change to '%s'" % (o, options[o]))
clean = False
- server_val = getattr(self.configuration, "%s_server" % o)
- if not options[o] and server_val:
- # restore value provided on server start
- setattr(self.configuration, o, server_val)
- continue
setattr(self.configuration, o, options[o])
for k in bb.utils.approved_variables():
if k in environment and k not in self.configuration.env:
- logger.debug(1, "Updating environment variable %s to %s" % (k, environment[k]))
+ logger.debug(1, "Updating new environment variable %s to %s" % (k, environment[k]))
self.configuration.env[k] = environment[k]
clean = False
if k in self.configuration.env and k not in environment:
@@ -569,14 +445,13 @@
del self.configuration.env[k]
clean = False
if k not in self.configuration.env and k not in environment:
- continue
+ continue
if environment[k] != self.configuration.env[k]:
- logger.debug(1, "Updating environment variable %s to %s" % (k, environment[k]))
+ logger.debug(1, "Updating environment variable %s from %s to %s" % (k, self.configuration.env[k], environment[k]))
self.configuration.env[k] = environment[k]
clean = False
if not clean:
logger.debug(1, "Base environment change, triggering reparse")
- self.baseconfig_valid = False
self.reset()
def runCommands(self, server, data, abort):
@@ -616,6 +491,12 @@
if not pkgs_to_build:
pkgs_to_build = []
+ orig_tracking = self.configuration.tracking
+ if not orig_tracking:
+ self.enableDataTracking()
+ self.reset()
+
+
if buildfile:
# Parse the configuration here. We need to do it explicitly here since
# this showEnvironment() code path doesn't use the cache
@@ -660,6 +541,9 @@
if envdata.getVarFlag(e, 'func', False) and envdata.getVarFlag(e, 'python', False):
logger.plain("\npython %s () {\n%s}\n", e, envdata.getVar(e, False))
+ if not orig_tracking:
+ self.disableDataTracking()
+ self.reset()
def buildTaskData(self, pkgs_to_build, task, abort, allowincomplete=False):
"""
@@ -817,12 +701,12 @@
depend_tree["pn"][pn][ei] = vars(self.recipecaches[mc])[ei][taskfn]
+ dotname = "%s.%s" % (pn, bb.runqueue.taskname_from_tid(tid))
+ if not dotname in depend_tree["tdepends"]:
+ depend_tree["tdepends"][dotname] = []
for dep in rq.rqdata.runtaskentries[tid].depends:
(depmc, depfn, deptaskname, deptaskfn) = bb.runqueue.split_tid_mcfn(dep)
deppn = self.recipecaches[mc].pkg_fn[deptaskfn]
- dotname = "%s.%s" % (pn, bb.runqueue.taskname_from_tid(tid))
- if not dotname in depend_tree["tdepends"]:
- depend_tree["tdepends"][dotname] = []
depend_tree["tdepends"][dotname].append("%s.%s" % (deppn, bb.runqueue.taskname_from_tid(dep)))
if taskfn not in seen_fns:
seen_fns.append(taskfn)
@@ -913,13 +797,13 @@
seen_fns.append(taskfn)
depend_tree["depends"][pn] = []
- for item in taskdata[mc].depids[taskfn]:
+ for dep in taskdata[mc].depids[taskfn]:
pn_provider = ""
if dep in taskdata[mc].build_targets and taskdata[mc].build_targets[dep]:
fn_provider = taskdata[mc].build_targets[dep][0]
pn_provider = self.recipecaches[mc].pkg_fn[fn_provider]
else:
- pn_provider = item
+ pn_provider = dep
pn_provider = self.add_mc_prefix(mc, pn_provider)
depend_tree["depends"][pn].append(pn_provider)
@@ -1046,18 +930,6 @@
providerlog.error("conflicting preferences for %s: both %s and %s specified", providee, provider, self.recipecaches[mc].preferred[providee])
self.recipecaches[mc].preferred[providee] = provider
- def findCoreBaseFiles(self, subdir, configfile):
- corebase = self.data.getVar('COREBASE') or ""
- paths = []
- for root, dirs, files in os.walk(corebase + '/' + subdir):
- for d in dirs:
- configfilepath = os.path.join(root, d, configfile)
- if os.path.exists(configfilepath):
- paths.append(os.path.join(root, d))
-
- if paths:
- bb.event.fire(bb.event.CoreBaseFilesFound(paths), self.data)
-
def findConfigFilePath(self, configfile):
"""
Find the location on disk of configfile and if it exists and was parsed by BitBake
@@ -1314,12 +1186,26 @@
"""
Setup any variables needed before starting a build
"""
- t = time.gmtime()
- if not self.data.getVar("BUILDNAME", False):
- self.data.setVar("BUILDNAME", "${DATE}${TIME}")
- self.data.setVar("BUILDSTART", time.strftime('%m/%d/%Y %H:%M:%S', t))
- self.data.setVar("DATE", time.strftime('%Y%m%d', t))
- self.data.setVar("TIME", time.strftime('%H%M%S', t))
+ t = time.gmtime()
+ for mc in self.databuilder.mcdata:
+ ds = self.databuilder.mcdata[mc]
+ if not ds.getVar("BUILDNAME", False):
+ ds.setVar("BUILDNAME", "${DATE}${TIME}")
+ ds.setVar("BUILDSTART", time.strftime('%m/%d/%Y %H:%M:%S', t))
+ ds.setVar("DATE", time.strftime('%Y%m%d', t))
+ ds.setVar("TIME", time.strftime('%H%M%S', t))
+
+ def reset_mtime_caches(self):
+ """
+ Reset mtime caches - this is particularly important when memory resident as something
+ which is cached is not unlikely to have changed since the last invocation (e.g. a
+ file associated with a recipe might have been modified by the user).
+ """
+ build.reset_cache()
+ bb.fetch._checksum_cache.mtime_cache.clear()
+ siggen_cache = getattr(bb.parse.siggen, 'checksum_cache', None)
+ if siggen_cache:
+ bb.parse.siggen.checksum_cache.mtime_cache.clear()
def matchFiles(self, bf):
"""
@@ -1360,16 +1246,22 @@
raise NoSpecificMatch
return matches[0]
- def buildFile(self, buildfile, task, hidewarning=False):
+ def buildFile(self, buildfile, task):
"""
Build the file matching regexp buildfile
"""
bb.event.fire(bb.event.BuildInit(), self.data)
- if not hidewarning:
- # Too many people use -b because they think it's how you normally
- # specify a target to be built, so show a warning
- bb.warn("Buildfile specified, dependencies will not be handled. If this is not what you want, do not use -b / --buildfile.")
+ # Too many people use -b because they think it's how you normally
+ # specify a target to be built, so show a warning
+ bb.warn("Buildfile specified, dependencies will not be handled. If this is not what you want, do not use -b / --buildfile.")
+
+ self.buildFileInternal(buildfile, task)
+
+ def buildFileInternal(self, buildfile, task, fireevents=True, quietlog=False):
+ """
+ Build the file matching regexp buildfile
+ """
# Parse the configuration here. We need to do it explicitly here since
# buildFile() doesn't use the cache
@@ -1385,6 +1277,7 @@
fn = self.matchFile(fn)
self.buildSetVars()
+ self.reset_mtime_caches()
bb_cache = bb.cache.Cache(self.databuilder, self.data_hash, self.caches_array)
@@ -1411,8 +1304,8 @@
# Remove external dependencies
self.recipecaches[mc].task_deps[fn]['depends'] = {}
self.recipecaches[mc].deps[fn] = []
- self.recipecaches[mc].rundeps[fn] = []
- self.recipecaches[mc].runrecs[fn] = []
+ self.recipecaches[mc].rundeps[fn] = defaultdict(list)
+ self.recipecaches[mc].runrecs[fn] = defaultdict(list)
# Invalidate task for target if force mode active
if self.configuration.force:
@@ -1422,10 +1315,15 @@
# Setup taskdata structure
taskdata = {}
taskdata[mc] = bb.taskdata.TaskData(self.configuration.abort)
- taskdata[mc].add_provider(self.data, self.recipecaches[mc], item)
+ taskdata[mc].add_provider(self.databuilder.mcdata[mc], self.recipecaches[mc], item)
- buildname = self.data.getVar("BUILDNAME")
- bb.event.fire(bb.event.BuildStarted(buildname, [item]), self.data)
+ if quietlog:
+ rqloglevel = bb.runqueue.logger.getEffectiveLevel()
+ bb.runqueue.logger.setLevel(logging.WARNING)
+
+ buildname = self.databuilder.mcdata[mc].getVar("BUILDNAME")
+ if fireevents:
+ bb.event.fire(bb.event.BuildStarted(buildname, [item]), self.databuilder.mcdata[mc])
# Execute the runqueue
runlist = [[mc, item, task, fn]]
@@ -1452,11 +1350,20 @@
retval = False
except SystemExit as exc:
self.command.finishAsyncCommand(str(exc))
+ if quietlog:
+ bb.runqueue.logger.setLevel(rqloglevel)
return False
if not retval:
- bb.event.fire(bb.event.BuildCompleted(len(rq.rqdata.runtaskentries), buildname, item, failures, interrupted), self.data)
+ if fireevents:
+ bb.event.fire(bb.event.BuildCompleted(len(rq.rqdata.runtaskentries), buildname, item, failures, interrupted), self.databuilder.mcdata[mc])
self.command.finishAsyncCommand(msg)
+ # We trashed self.recipecaches above
+ self.parsecache_valid = False
+ self.configuration.limited_deps = False
+ bb.parse.siggen.reset(self.data)
+ if quietlog:
+ bb.runqueue.logger.setLevel(rqloglevel)
return False
if retval is True:
return True
@@ -1491,14 +1398,17 @@
return False
if not retval:
- bb.event.fire(bb.event.BuildCompleted(len(rq.rqdata.runtaskentries), buildname, targets, failures, interrupted), self.data)
- self.command.finishAsyncCommand(msg)
+ try:
+ for mc in self.multiconfigs:
+ bb.event.fire(bb.event.BuildCompleted(len(rq.rqdata.runtaskentries), buildname, targets, failures, interrupted), self.databuilder.mcdata[mc])
+ finally:
+ self.command.finishAsyncCommand(msg)
return False
if retval is True:
return True
return retval
- build.reset_cache()
+ self.reset_mtime_caches()
self.buildSetVars()
# If we are told to do the None task then query the default task
@@ -1523,7 +1433,8 @@
ntargets.append("multiconfig:%s:%s:%s" % (target[0], target[1], target[2]))
ntargets.append("%s:%s" % (target[1], target[2]))
- bb.event.fire(bb.event.BuildStarted(buildname, ntargets), self.data)
+ for mc in self.multiconfigs:
+ bb.event.fire(bb.event.BuildStarted(buildname, ntargets), self.databuilder.mcdata[mc])
rq = bb.runqueue.RunQueue(self, self.data, self.recipecaches, taskdata, runlist)
if 'universe' in targets:
@@ -1556,55 +1467,6 @@
return dump
- def generateNewImage(self, image, base_image, package_queue, timestamp, description):
- '''
- Create a new image with a "require"/"inherit" base_image statement
- '''
- if timestamp:
- image_name = os.path.splitext(image)[0]
- timestr = time.strftime("-%Y%m%d-%H%M%S")
- dest = image_name + str(timestr) + ".bb"
- else:
- if not image.endswith(".bb"):
- dest = image + ".bb"
- else:
- dest = image
-
- basename = False
- if base_image:
- with open(base_image, 'r') as f:
- require_line = f.readline()
- p = re.compile("IMAGE_BASENAME *=")
- for line in f:
- if p.search(line):
- basename = True
-
- with open(dest, "w") as imagefile:
- if base_image is None:
- imagefile.write("inherit core-image\n")
- else:
- topdir = self.data.getVar("TOPDIR", False)
- if topdir in base_image:
- base_image = require_line.split()[1]
- imagefile.write("require " + base_image + "\n")
- image_install = "IMAGE_INSTALL = \""
- for package in package_queue:
- image_install += str(package) + " "
- image_install += "\"\n"
- imagefile.write(image_install)
-
- description_var = "DESCRIPTION = \"" + description + "\"\n"
- imagefile.write(description_var)
-
- if basename:
- # If this is overwritten in a inherited image, reset it to default
- image_basename = "IMAGE_BASENAME = \"${PN}\"\n"
- imagefile.write(image_basename)
-
- self.state = state.initial
- if timestamp:
- return timestr
-
def updateCacheSync(self):
if self.state == state.running:
return
@@ -1619,8 +1481,7 @@
if not self.baseconfig_valid:
logger.debug(1, "Reloading base configuration data")
self.initConfigurationData()
- self.baseconfig_valid = True
- self.parsecache_valid = False
+ self.handlePRServ()
# This is called for all async commands when self.state != running
def updateCache(self):
@@ -1636,6 +1497,7 @@
self.updateCacheSync()
if self.state != state.parsing and not self.parsecache_valid:
+ bb.parse.siggen.reset(self.data)
self.parseConfiguration ()
if CookerFeatures.SEND_SANITYEVENTS in self.featureset:
for mc in self.multiconfigs:
@@ -1723,46 +1585,14 @@
return pkgs_to_build
def pre_serve(self):
- # Empty the environment. The environment will be populated as
- # necessary from the data store.
- #bb.utils.empty_environment()
- try:
- self.prhost = prserv.serv.auto_start(self.data)
- except prserv.serv.PRServiceConfigError:
- bb.event.fire(CookerExit(), self.data)
- self.state = state.error
+ # We now are in our own process so we can call this here.
+ # PRServ exits if its parent process exits
+ self.handlePRServ()
return
def post_serve(self):
- prserv.serv.auto_shutdown(self.data)
+ prserv.serv.auto_shutdown()
bb.event.fire(CookerExit(), self.data)
- lockfile = self.lock.name
- self.lock.close()
- self.lock = None
-
- while not self.lock:
- with bb.utils.timeout(3):
- self.lock = bb.utils.lockfile(lockfile, shared=False, retry=False, block=True)
- if not self.lock:
- # Some systems may not have lsof available
- procs = None
- try:
- procs = subprocess.check_output(["lsof", '-w', lockfile], stderr=subprocess.STDOUT)
- except OSError as e:
- if e.errno != errno.ENOENT:
- raise
- if procs is None:
- # Fall back to fuser if lsof is unavailable
- try:
- procs = subprocess.check_output(["fuser", '-v', lockfile], stderr=subprocess.STDOUT)
- except OSError as e:
- if e.errno != errno.ENOENT:
- raise
-
- msg = "Delaying shutdown due to active processes which appear to be holding bitbake.lock"
- if procs:
- msg += ":\n%s" % str(procs)
- print(msg)
def shutdown(self, force = False):
@@ -1784,46 +1614,12 @@
def clientComplete(self):
"""Called when the client is done using the server"""
- if self.configuration.server_only:
- self.finishcommand()
- else:
- self.shutdown(True)
+ self.finishcommand()
+ self.extraconfigdata = {}
+ self.command.reset()
+ self.databuilder.reset()
+ self.data = self.databuilder.data
- def lockBitbake(self):
- if not hasattr(self, 'lock'):
- self.lock = None
- if self.data:
- lockfile = self.data.expand("${TOPDIR}/bitbake.lock")
- if lockfile:
- self.lock = bb.utils.lockfile(lockfile, False, False)
- return self.lock
-
- def unlockBitbake(self):
- if hasattr(self, 'lock') and self.lock:
- bb.utils.unlockfile(self.lock)
-
-def server_main(cooker, func, *args):
- cooker.pre_serve()
-
- if cooker.configuration.profile:
- try:
- import cProfile as profile
- except:
- import profile
- prof = profile.Profile()
-
- ret = profile.Profile.runcall(prof, func, *args)
-
- prof.dump_stats("profile.log")
- bb.utils.process_profilelog("profile.log")
- print("Raw profiling information saved to profile.log and processed statistics to profile.log.processed")
-
- else:
- ret = func(*args)
-
- cooker.post_serve()
-
- return ret
class CookerExit(bb.event.Event):
"""
@@ -1890,15 +1686,23 @@
# We need to track where we look so that we can add inotify watches. There
# is no nice way to do this, this is horrid. We intercept the os.listdir()
- # calls while we run glob().
+ # (or os.scandir() for python 3.6+) calls while we run glob().
origlistdir = os.listdir
+ if hasattr(os, 'scandir'):
+ origscandir = os.scandir
searchdirs = []
def ourlistdir(d):
searchdirs.append(d)
return origlistdir(d)
+ def ourscandir(d):
+ searchdirs.append(d)
+ return origscandir(d)
+
os.listdir = ourlistdir
+ if hasattr(os, 'scandir'):
+ os.scandir = ourscandir
try:
# Can't use set here as order is important
newfiles = []
@@ -1918,6 +1722,8 @@
newfiles.append(g)
finally:
os.listdir = origlistdir
+ if hasattr(os, 'scandir'):
+ os.scandir = origscandir
bbmask = config.getVar('BBMASK')
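
The listdir/scandir interception above exists because glob() offers no hook for reporting which directories it visited, and the cooker needs that list to add inotify watches. A standalone sketch of the same trick, assuming only the standard library (newer Pythons route glob() through os.scandir, older ones through os.listdir); glob_with_searchdirs is illustrative, not part of the patch:

    import glob
    import os

    def glob_with_searchdirs(pattern):
        searchdirs = []
        origlistdir = os.listdir
        origscandir = getattr(os, 'scandir', None)

        def ourlistdir(d):
            searchdirs.append(d)
            return origlistdir(d)

        def ourscandir(d):
            searchdirs.append(d)
            return origscandir(d)

        os.listdir = ourlistdir
        if origscandir:
            os.scandir = ourscandir
        try:
            matches = glob.glob(pattern)
        finally:
            # Always restore the real functions, even if glob() raises.
            os.listdir = origlistdir
            if origscandir:
                os.scandir = origscandir
        return matches, searchdirs
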
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/cookerdata.py b/import-layers/yocto-poky/bitbake/lib/bb/cookerdata.py
index e408a35..fab47c7 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/cookerdata.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/cookerdata.py
@@ -41,10 +41,6 @@
self.options.pkgs_to_build = targets or []
- self.options.tracking = False
- if hasattr(self.options, "show_environment") and self.options.show_environment:
- self.options.tracking = True
-
for key, val in self.options.__dict__.items():
setattr(self, key, val)
@@ -73,15 +69,15 @@
def updateToServer(self, server, environment):
options = {}
- for o in ["abort", "tryaltconfigs", "force", "invalidate_stamp",
- "verbose", "debug", "dry_run", "dump_signatures",
+ for o in ["abort", "force", "invalidate_stamp",
+ "verbose", "debug", "dry_run", "dump_signatures",
"debug_domains", "extra_assume_provided", "profile",
- "prefile", "postfile"]:
+ "prefile", "postfile", "server_timeout"]:
options[o] = getattr(self.options, o)
ret, error = server.runCommand(["updateConfig", options, environment, sys.argv])
if error:
- raise Exception("Unable to update the server configuration with local parameters: %s" % error)
+ raise Exception("Unable to update the server configuration with local parameters: %s" % error)
def parseActions(self):
# Parse any commandline into actions
@@ -131,8 +127,6 @@
self.extra_assume_provided = []
self.prefile = []
self.postfile = []
- self.prefile_server = []
- self.postfile_server = []
self.debug = 0
self.cmd = None
self.abort = True
@@ -144,7 +138,8 @@
self.dump_signatures = []
self.dry_run = False
self.tracking = False
- self.interface = []
+ self.xmlrpcinterface = []
+ self.server_timeout = None
self.writeeventlog = False
self.server_only = False
self.limited_deps = False
@@ -157,7 +152,6 @@
if key in parameters.options.__dict__:
setattr(self, key, parameters.options.__dict__[key])
self.env = parameters.environment.copy()
- self.tracking = parameters.tracking
def setServerRegIdleCallback(self, srcb):
self.server_register_idlecallback = srcb
@@ -173,7 +167,7 @@
def __setstate__(self,state):
for k in state:
- setattr(self, k, state[k])
+ setattr(self, k, state[k])
def catch_parse_error(func):
@@ -230,6 +224,27 @@
return None
+#
+# We search for a conf/bblayers.conf under an entry in BBPATH or in cwd working
+# up to /. If that fails, we search for a conf/bitbake.conf in BBPATH.
+#
+
+def findTopdir():
+ d = bb.data.init()
+ bbpath = None
+ if 'BBPATH' in os.environ:
+ bbpath = os.environ['BBPATH']
+ d.setVar('BBPATH', bbpath)
+
+ layerconf = findConfigFile("bblayers.conf", d)
+ if layerconf:
+ return os.path.dirname(os.path.dirname(layerconf))
+ if bbpath:
+ bitbakeconf = bb.utils.which(bbpath, "conf/bitbake.conf")
+ if bitbakeconf:
+ return os.path.dirname(os.path.dirname(bitbakeconf))
+ return None
+
class CookerDataBuilder(object):
def __init__(self, cookercfg, worker = False):
@@ -255,7 +270,7 @@
filtered_keys = bb.utils.approved_variables()
bb.data.inheritFromOS(self.basedata, self.savedenv, filtered_keys)
self.basedata.setVar("BB_ORIGENV", self.savedenv)
-
+
if worker:
self.basedata.setVar("BB_WORKERCONTEXT", "1")
@@ -294,6 +309,8 @@
mcdata = self.parseConfigurationFiles(self.prefiles, self.postfiles, config)
bb.event.fire(bb.event.ConfigParsed(), mcdata)
self.mcdata[config] = mcdata
+ if multiconfig:
+ bb.event.fire(bb.event.MultiConfigParsed(self.mcdata), self.data)
except (SyntaxError, bb.BBHandledException):
raise bb.BBHandledException
@@ -304,6 +321,18 @@
logger.exception("Error parsing configuration files")
raise bb.BBHandledException
+ # Create a copy so we can reset at a later date when UIs disconnect
+ self.origdata = self.data
+ self.data = bb.data.createCopy(self.origdata)
+ self.mcdata[''] = self.data
+
+ def reset(self):
+ # We may not have run parseBaseConfiguration() yet
+ if not hasattr(self, 'origdata'):
+ return
+ self.data = bb.data.createCopy(self.origdata)
+ self.mcdata[''] = self.data
+
def _findLayerConf(self, data):
return findConfigFile("bblayers.conf", data)
@@ -346,6 +375,27 @@
data.delVar('LAYERDIR_RE')
data.delVar('LAYERDIR')
+ bbfiles_dynamic = (data.getVar('BBFILES_DYNAMIC') or "").split()
+ collections = (data.getVar('BBFILE_COLLECTIONS') or "").split()
+ invalid = []
+ for entry in bbfiles_dynamic:
+ parts = entry.split(":", 1)
+ if len(parts) != 2:
+ invalid.append(entry)
+ continue
+ l, f = parts
+ if l in collections:
+ data.appendVar("BBFILES", " " + f)
+ if invalid:
+ bb.fatal("BBFILES_DYNAMIC entries must be of the form <collection name>:<filename pattern>, not:\n %s" % "\n ".join(invalid))
+
+ layerseries = set((data.getVar("LAYERSERIES_CORENAMES") or "").split())
+ for c in collections:
+ compat = set((data.getVar("LAYERSERIES_COMPAT_%s" % c) or "").split())
+ if compat and not (compat & layerseries):
+ bb.fatal("Layer %s is not compatible with the core layer which only supports these series: %s (layer is compatible with %s)"
+ % (c, " ".join(layerseries), " ".join(compat)))
+
if not data.getVar("BBPATH"):
msg = "The BBPATH variable is not set"
if not layerconf:
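
The two checks added to the bblayers.conf handling above are plain string and set operations on layer.conf variables: BBFILES_DYNAMIC entries must have the form <collection name>:<filename pattern> and only extend BBFILES when the named collection is present, and a layer that declares LAYERSERIES_COMPAT_<collection> must share at least one series with LAYERSERIES_CORENAMES. A standalone sketch of the compatibility rule, assuming the variables are plain space-separated strings (check_layer_compat and meta-foo are illustrative):

    def check_layer_compat(collections, corenames, compat_by_layer):
        # Mirror of the LAYERSERIES_COMPAT check: a layer that declares any
        # compatible series must share at least one with the core layer.
        core = set(corenames.split())
        for c in collections.split():
            compat = set(compat_by_layer.get(c, "").split())
            if compat and not (compat & core):
                raise RuntimeError("Layer %s is compatible with %s but the core "
                                   "layer only supports %s"
                                   % (c, " ".join(compat), " ".join(core)))

    # A rocko-compatible layer against a rocko core passes silently;
    # swapping "rocko" for "pyro" in compat_by_layer would raise.
    check_layer_compat("meta-foo", "rocko", {"meta-foo": "rocko"})
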
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/daemonize.py b/import-layers/yocto-poky/bitbake/lib/bb/daemonize.py
index ab4a954..8300d1d 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/daemonize.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/daemonize.py
@@ -1,48 +1,14 @@
"""
Python Daemonizing helper
-Configurable daemon behaviors:
-
- 1.) The current working directory set to the "/" directory.
- 2.) The current file creation mode mask set to 0.
- 3.) Close all open files (1024).
- 4.) Redirect standard I/O streams to "/dev/null".
-
-A failed call to fork() now raises an exception.
-
-References:
- 1) Advanced Programming in the Unix Environment: W. Richard Stevens
- http://www.apuebook.com/apue3e.html
- 2) The Linux Programming Interface: Michael Kerrisk
- http://man7.org/tlpi/index.html
- 3) Unix Programming Frequently Asked Questions:
- http://www.faqs.org/faqs/unix-faq/programmer/faq/
-
-Modified to allow a function to be daemonized and return for
-bitbake use by Richard Purdie
+Originally based on code Copyright (C) 2005 Chad J. Schroeder but now heavily modified
+to allow a function to be daemonized and return for bitbake use by Richard Purdie
"""
-__author__ = "Chad J. Schroeder"
-__copyright__ = "Copyright (C) 2005 Chad J. Schroeder"
-__version__ = "0.2"
-
-# Standard Python modules.
-import os # Miscellaneous OS interfaces.
-import sys # System-specific parameters and functions.
-
-# Default daemon parameters.
-# File mode creation mask of the daemon.
-# For BitBake's children, we do want to inherit the parent umask.
-UMASK = None
-
-# Default maximum for the number of available file descriptors.
-MAXFD = 1024
-
-# The standard I/O file descriptors are redirected to /dev/null by default.
-if (hasattr(os, "devnull")):
- REDIRECT_TO = os.devnull
-else:
- REDIRECT_TO = "/dev/null"
+import os
+import sys
+import io
+import traceback
+import bb.event
def createDaemon(function, logfile):
"""
@@ -65,36 +31,6 @@
# leader of the new process group, we call os.setsid(). The process is
# also guaranteed not to have a controlling terminal.
os.setsid()
-
- # Is ignoring SIGHUP necessary?
- #
- # It's often suggested that the SIGHUP signal should be ignored before
- # the second fork to avoid premature termination of the process. The
- # reason is that when the first child terminates, all processes, e.g.
- # the second child, in the orphaned group will be sent a SIGHUP.
- #
- # "However, as part of the session management system, there are exactly
- # two cases where SIGHUP is sent on the death of a process:
- #
- # 1) When the process that dies is the session leader of a session that
- # is attached to a terminal device, SIGHUP is sent to all processes
- # in the foreground process group of that terminal device.
- # 2) When the death of a process causes a process group to become
- # orphaned, and one or more processes in the orphaned group are
- # stopped, then SIGHUP and SIGCONT are sent to all members of the
- # orphaned group." [2]
- #
- # The first case can be ignored since the child is guaranteed not to have
- # a controlling terminal. The second case isn't so easy to dismiss.
- # The process group is orphaned when the first child terminates and
- # POSIX.1 requires that every STOPPED process in an orphaned process
- # group be sent a SIGHUP signal followed by a SIGCONT signal. Since the
- # second child is not STOPPED though, we can safely forego ignoring the
- # SIGHUP signal. In any case, there are no ill-effects if it is ignored.
- #
- # import signal # Set handlers for asynchronous events.
- # signal.signal(signal.SIGHUP, signal.SIG_IGN)
-
try:
# Fork a second child and exit immediately to prevent zombies. This
# causes the second child process to be orphaned, making the init
@@ -108,86 +44,39 @@
except OSError as e:
raise Exception("%s [%d]" % (e.strerror, e.errno))
- if (pid == 0): # The second child.
- # We probably don't want the file mode creation mask inherited from
- # the parent, so we give the child complete control over permissions.
- if UMASK is not None:
- os.umask(UMASK)
- else:
+ if (pid != 0):
# Parent (the first child) of the second child.
+ # exit() or _exit()?
+ # _exit is like exit(), but it doesn't call any functions registered
+ # with atexit (and on_exit) or any registered signal handlers. It also
+ # closes any open file descriptors. Using exit() may cause all stdio
+ # streams to be flushed twice and any temporary files may be unexpectedly
+ # removed. It's therefore recommended that child branches of a fork()
+ # and the parent branch(es) of a daemon use _exit().
os._exit(0)
else:
- # exit() or _exit()?
- # _exit is like exit(), but it doesn't call any functions registered
- # with atexit (and on_exit) or any registered signal handlers. It also
- # closes any open file descriptors. Using exit() may cause all stdio
- # streams to be flushed twice and any temporary files may be unexpectedly
- # removed. It's therefore recommended that child branches of a fork()
- # and the parent branch(es) of a daemon use _exit().
+ os.waitpid(pid, 0)
return
- # Close all open file descriptors. This prevents the child from keeping
- # open any file descriptors inherited from the parent. There is a variety
- # of methods to accomplish this task. Three are listed below.
- #
- # Try the system configuration variable, SC_OPEN_MAX, to obtain the maximum
- # number of open file descriptors to close. If it doesn't exist, use
- # the default value (configurable).
- #
- # try:
- # maxfd = os.sysconf("SC_OPEN_MAX")
- # except (AttributeError, ValueError):
- # maxfd = MAXFD
- #
- # OR
- #
- # if (os.sysconf_names.has_key("SC_OPEN_MAX")):
- # maxfd = os.sysconf("SC_OPEN_MAX")
- # else:
- # maxfd = MAXFD
- #
- # OR
- #
- # Use the getrlimit method to retrieve the maximum file descriptor number
- # that can be opened by this process. If there is no limit on the
- # resource, use the default value.
- #
- import resource # Resource usage information.
- maxfd = resource.getrlimit(resource.RLIMIT_NOFILE)[1]
- if (maxfd == resource.RLIM_INFINITY):
- maxfd = MAXFD
-
- # Iterate through and close all file descriptors.
-# for fd in range(0, maxfd):
-# try:
-# os.close(fd)
-# except OSError: # ERROR, fd wasn't open to begin with (ignored)
-# pass
+ # The second child.
- # Redirect the standard I/O file descriptors to the specified file. Since
- # the daemon has no controlling terminal, most daemons redirect stdin,
- # stdout, and stderr to /dev/null. This is done to prevent side-effects
- # from reads and writes to the standard I/O file descriptors.
-
- # This call to open is guaranteed to return the lowest file descriptor,
- # which will be 0 (stdin), since it was closed above.
-# os.open(REDIRECT_TO, os.O_RDWR) # standard input (0)
-
- # Duplicate standard input to standard output and standard error.
-# os.dup2(0, 1) # standard output (1)
-# os.dup2(0, 2) # standard error (2)
-
-
+ # Replace standard fds with our own
si = open('/dev/null', 'r')
- so = open(logfile, 'w')
- se = so
-
-
- # Replace those fds with our own
os.dup2(si.fileno(), sys.stdin.fileno())
- os.dup2(so.fileno(), sys.stdout.fileno())
- os.dup2(se.fileno(), sys.stderr.fileno())
- function()
+ try:
+ so = open(logfile, 'a+')
+ se = so
+ os.dup2(so.fileno(), sys.stdout.fileno())
+ os.dup2(se.fileno(), sys.stderr.fileno())
+ except io.UnsupportedOperation:
+ sys.stdout = open(logfile, 'a+')
+ sys.stderr = sys.stdout
- os._exit(0)
+ try:
+ function()
+ except Exception as e:
+ traceback.print_exc()
+ finally:
+ bb.event.print_ui_queue()
+ os._exit(0)
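
The rewritten createDaemon() above is the classic double-fork: the original process reaps the first child and returns, the first child starts a new session and forks the real daemon, and the daemon redirects its stdio to the logfile, runs the supplied function, flushes any queued UI messages, and leaves via _exit(). A condensed sketch of the idiom (stdin redirection to /dev/null and the bitbake-specific details omitted; daemonize here is illustrative):

    import os
    import sys
    import traceback

    def daemonize(function, logfile):
        pid = os.fork()
        if pid != 0:
            os.waitpid(pid, 0)     # original process reaps the first child and returns
            return
        os.setsid()                # first child: new session, no controlling terminal
        if os.fork() != 0:
            os._exit(0)            # first child exits; the daemon is adopted by init
        # grandchild: the daemon itself
        sys.stdout = sys.stderr = open(logfile, 'a+')
        try:
            function()
        except Exception:
            traceback.print_exc()
        finally:
            os._exit(0)            # never return into the caller's stack
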
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/data.py b/import-layers/yocto-poky/bitbake/lib/bb/data.py
index 134afaa..80a7879 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/data.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/data.py
@@ -290,7 +290,7 @@
return deps, value
varflags = d.getVarFlags(key, ["vardeps", "vardepvalue", "vardepsexclude", "exports", "postfuncs", "prefuncs", "lineno", "filename"]) or {}
vardeps = varflags.get("vardeps")
- value = d.getVar(key, False)
+ value = d.getVarFlag(key, "_content", False)
def handle_contains(value, contains, d):
newvalue = ""
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/data_smart.py b/import-layers/yocto-poky/bitbake/lib/bb/data_smart.py
index 7dc1c68..7b09af5 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/data_smart.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/data_smart.py
@@ -39,7 +39,7 @@
logger = logging.getLogger("BitBake.Data")
__setvar_keyword__ = ["_append", "_prepend", "_remove"]
-__setvar_regexp__ = re.compile('(?P<base>.*?)(?P<keyword>_append|_prepend|_remove)(_(?P<add>.*))?$')
+__setvar_regexp__ = re.compile('(?P<base>.*?)(?P<keyword>_append|_prepend|_remove)(_(?P<add>[^A-Z]*))?$')
__expand_var_regexp__ = re.compile(r"\${[^{}@\n\t :]+}")
__expand_python_regexp__ = re.compile(r"\${@.+?}")
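
The one-character tightening of __setvar_regexp__ above means a trailing _append_<x> / _prepend_<x> / _remove_<x> suffix is only treated as an override when it contains no upper-case characters, matching the convention that override names are lower case. A quick, self-contained illustration of the behavioural difference (the variable names below are made up for the demo):

    import re

    old = re.compile('(?P<base>.*?)(?P<keyword>_append|_prepend|_remove)(_(?P<add>.*))?$')
    new = re.compile('(?P<base>.*?)(?P<keyword>_append|_prepend|_remove)(_(?P<add>[^A-Z]*))?$')

    for var in ("SRC_URI_append_qemuarm", "EXTRA_IMAGE_append_FEATURES"):
        print(var, bool(old.match(var)), bool(new.match(var)))
    # SRC_URI_append_qemuarm      True True   (still parsed as an append override)
    # EXTRA_IMAGE_append_FEATURES True False  (upper-case tail, no longer matched)
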
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/event.py b/import-layers/yocto-poky/bitbake/lib/bb/event.py
index 6d8493b..52072b5 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/event.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/event.py
@@ -149,23 +149,34 @@
# First check to see if we have any proper messages
msgprint = False
+ msgerrs = False
+
+ # Should we print to stderr?
+ for event in ui_queue[:]:
+ if isinstance(event, logging.LogRecord) and event.levelno >= logging.WARNING:
+ msgerrs = True
+ break
+
+ if msgerrs:
+ logger.addHandler(stderr)
+ else:
+ logger.addHandler(stdout)
+
for event in ui_queue[:]:
if isinstance(event, logging.LogRecord):
if event.levelno > logging.DEBUG:
- if event.levelno >= logging.WARNING:
- logger.addHandler(stderr)
- else:
- logger.addHandler(stdout)
logger.handle(event)
msgprint = True
- if msgprint:
- return
# Nope, so just print all of the messages we have (including debug messages)
- logger.addHandler(stdout)
- for event in ui_queue[:]:
- if isinstance(event, logging.LogRecord):
- logger.handle(event)
+ if not msgprint:
+ for event in ui_queue[:]:
+ if isinstance(event, logging.LogRecord):
+ logger.handle(event)
+ if msgerrs:
+ logger.removeHandler(stderr)
+ else:
+ logger.removeHandler(stdout)
def fire_ui_handlers(event, d):
global _thread_lock
@@ -212,6 +223,12 @@
if worker_fire:
worker_fire(event, d)
else:
+ # If messages have been queued up, clear the queue
+ global _uiready, ui_queue
+ if _uiready and ui_queue:
+ for queue_event in ui_queue:
+ fire_ui_handlers(queue_event, d)
+ ui_queue = []
fire_ui_handlers(event, d)
def fire_from_worker(event, d):
@@ -264,6 +281,11 @@
def remove(name, handler):
"""Remove an Event handler"""
_handlers.pop(name)
+ if name in _catchall_handlers:
+ _catchall_handlers.pop(name)
+ for event in _event_handler_map.keys():
+ if name in _event_handler_map[event]:
+ _event_handler_map[event].pop(name)
def get_handlers():
return _handlers
@@ -277,20 +299,28 @@
_eventfilter = func
def register_UIHhandler(handler, mainui=False):
- if mainui:
- global _uiready
- _uiready = True
bb.event._ui_handler_seq = bb.event._ui_handler_seq + 1
_ui_handlers[_ui_handler_seq] = handler
level, debug_domains = bb.msg.constructLogOptions()
_ui_logfilters[_ui_handler_seq] = UIEventFilter(level, debug_domains)
+ if mainui:
+ global _uiready
+ _uiready = _ui_handler_seq
return _ui_handler_seq
-def unregister_UIHhandler(handlerNum):
+def unregister_UIHhandler(handlerNum, mainui=False):
+ if mainui:
+ global _uiready
+ _uiready = False
if handlerNum in _ui_handlers:
del _ui_handlers[handlerNum]
return
+def get_uihandler():
+ if _uiready is False:
+ return None
+ return _uiready
+
# Class to allow filtering of events and specific filtering of LogRecords *before* we put them over the IPC
class UIEventFilter(object):
def __init__(self, level, debug_domains):
@@ -353,6 +383,12 @@
class ConfigParsed(Event):
"""Configuration Parsing Complete"""
+class MultiConfigParsed(Event):
+ """Multi-Config Parsing Complete"""
+ def __init__(self, mcdata):
+ self.mcdata = mcdata
+ Event.__init__(self)
+
class RecipeEvent(Event):
def __init__(self, fn):
self.fn = fn
@@ -496,6 +532,28 @@
def isRuntime(self):
return self._runtime
+ def __str__(self):
+ msg = ''
+ if self._runtime:
+ r = "R"
+ else:
+ r = ""
+
+ extra = ''
+ if not self._reasons:
+ if self._close_matches:
+ extra = ". Close matches:\n %s" % '\n '.join(self._close_matches)
+
+ if self._dependees:
+ msg = "Nothing %sPROVIDES '%s' (but %s %sDEPENDS on or otherwise requires it)%s" % (r, self._item, ", ".join(self._dependees), r, extra)
+ else:
+ msg = "Nothing %sPROVIDES '%s'%s" % (r, self._item, extra)
+ if self._reasons:
+ for reason in self._reasons:
+ msg += '\n' + reason
+ return msg
+
+
class MultipleProviders(Event):
"""Multiple Providers"""
@@ -523,6 +581,16 @@
"""
return self._candidates
+ def __str__(self):
+ msg = "Multiple providers are available for %s%s (%s)" % (self._is_runtime and "runtime " or "",
+ self._item,
+ ", ".join(self._candidates))
+ rtime = ""
+ if self._is_runtime:
+ rtime = "R"
+ msg += "\nConsider defining a PREFERRED_%sPROVIDER entry to match %s" % (rtime, self._item)
+ return msg
+
class ParseStarted(OperationStarted):
"""Recipe parsing for the runqueue has begun"""
def __init__(self, total):
@@ -616,14 +684,6 @@
self._pattern = pattern
self._matches = matches
-class CoreBaseFilesFound(Event):
- """
- Event when a list of appropriate config files has been generated
- """
- def __init__(self, paths):
- Event.__init__(self)
- self._paths = paths
-
class ConfigFilesFound(Event):
"""
Event when a list of appropriate config files has been generated
@@ -694,19 +754,6 @@
record.taskpid = worker_pid
return True
-class RequestPackageInfo(Event):
- """
- Event to request package information
- """
-
-class PackageInfo(Event):
- """
- Package information for GUI
- """
- def __init__(self, pkginfolist):
- Event.__init__(self)
- self._pkginfolist = pkginfolist
-
class MetadataEvent(Event):
"""
Generic event that target for OE-Core classes
@@ -784,3 +831,10 @@
Event to indicate network test has failed
"""
+class FindSigInfoResult(Event):
+ """
+ Event to return results from findSigInfo command
+ """
+ def __init__(self, result):
+ Event.__init__(self)
+ self.result = result
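
With the changes above, bb.event._uiready stores the sequence number of the main UI handler instead of a bare boolean, which is what the new getUIHandlerNum command in command.py reports. A minimal sketch of the resulting semantics, assuming a normal bitbake environment where bb.event is importable; PlaceholderUI is only a stand-in, no events are actually fired at it:

    import bb.event

    class PlaceholderUI:
        pass

    num = bb.event.register_UIHhandler(PlaceholderUI(), mainui=True)
    assert bb.event.get_uihandler() == num      # main handler's sequence number
    bb.event.unregister_UIHhandler(num, mainui=True)
    assert bb.event.get_uihandler() is None     # no main UI registered any more
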
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/fetch2/__init__.py b/import-layers/yocto-poky/bitbake/lib/bb/fetch2/__init__.py
index b853da3..f70f1b5 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/fetch2/__init__.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/fetch2/__init__.py
@@ -39,6 +39,7 @@
import bb.persist_data, bb.utils
import bb.checksum
import bb.process
+import bb.event
__version__ = "2"
_checksum_cache = bb.checksum.FileChecksumCache()
@@ -48,11 +49,11 @@
class BBFetchException(Exception):
"""Class all fetch exceptions inherit from"""
def __init__(self, message):
- self.msg = message
- Exception.__init__(self, message)
+ self.msg = message
+ Exception.__init__(self, message)
def __str__(self):
- return self.msg
+ return self.msg
class UntrustedUrl(BBFetchException):
"""Exception raised when encountering a host not listed in BB_ALLOWED_NETWORKS"""
@@ -68,24 +69,24 @@
class MalformedUrl(BBFetchException):
"""Exception raised when encountering an invalid url"""
def __init__(self, url, message=''):
- if message:
- msg = message
- else:
- msg = "The URL: '%s' is invalid and cannot be interpreted" % url
- self.url = url
- BBFetchException.__init__(self, msg)
- self.args = (url,)
+ if message:
+ msg = message
+ else:
+ msg = "The URL: '%s' is invalid and cannot be interpreted" % url
+ self.url = url
+ BBFetchException.__init__(self, msg)
+ self.args = (url,)
class FetchError(BBFetchException):
"""General fetcher exception when something happens incorrectly"""
def __init__(self, message, url = None):
- if url:
+ if url:
msg = "Fetcher failure for URL: '%s'. %s" % (url, message)
- else:
+ else:
msg = "Fetcher failure: %s" % message
- self.url = url
- BBFetchException.__init__(self, msg)
- self.args = (message, url)
+ self.url = url
+ BBFetchException.__init__(self, msg)
+ self.args = (message, url)
class ChecksumError(FetchError):
"""Exception when mismatched checksum encountered"""
@@ -99,49 +100,56 @@
class UnpackError(BBFetchException):
"""General fetcher exception when something happens incorrectly when unpacking"""
def __init__(self, message, url):
- msg = "Unpack failure for URL: '%s'. %s" % (url, message)
- self.url = url
- BBFetchException.__init__(self, msg)
- self.args = (message, url)
+ msg = "Unpack failure for URL: '%s'. %s" % (url, message)
+ self.url = url
+ BBFetchException.__init__(self, msg)
+ self.args = (message, url)
class NoMethodError(BBFetchException):
"""Exception raised when there is no method to obtain a supplied url or set of urls"""
def __init__(self, url):
- msg = "Could not find a fetcher which supports the URL: '%s'" % url
- self.url = url
- BBFetchException.__init__(self, msg)
- self.args = (url,)
+ msg = "Could not find a fetcher which supports the URL: '%s'" % url
+ self.url = url
+ BBFetchException.__init__(self, msg)
+ self.args = (url,)
class MissingParameterError(BBFetchException):
"""Exception raised when a fetch method is missing a critical parameter in the url"""
def __init__(self, missing, url):
- msg = "URL: '%s' is missing the required parameter '%s'" % (url, missing)
- self.url = url
- self.missing = missing
- BBFetchException.__init__(self, msg)
- self.args = (missing, url)
+ msg = "URL: '%s' is missing the required parameter '%s'" % (url, missing)
+ self.url = url
+ self.missing = missing
+ BBFetchException.__init__(self, msg)
+ self.args = (missing, url)
class ParameterError(BBFetchException):
"""Exception raised when a url cannot be proccessed due to invalid parameters."""
def __init__(self, message, url):
- msg = "URL: '%s' has invalid parameters. %s" % (url, message)
- self.url = url
- BBFetchException.__init__(self, msg)
- self.args = (message, url)
+ msg = "URL: '%s' has invalid parameters. %s" % (url, message)
+ self.url = url
+ BBFetchException.__init__(self, msg)
+ self.args = (message, url)
class NetworkAccess(BBFetchException):
"""Exception raised when network access is disabled but it is required."""
def __init__(self, url, cmd):
- msg = "Network access disabled through BB_NO_NETWORK (or set indirectly due to use of BB_FETCH_PREMIRRORONLY) but access requested with command %s (for url %s)" % (cmd, url)
- self.url = url
- self.cmd = cmd
- BBFetchException.__init__(self, msg)
- self.args = (url, cmd)
+ msg = "Network access disabled through BB_NO_NETWORK (or set indirectly due to use of BB_FETCH_PREMIRRORONLY) but access requested with command %s (for url %s)" % (cmd, url)
+ self.url = url
+ self.cmd = cmd
+ BBFetchException.__init__(self, msg)
+ self.args = (url, cmd)
class NonLocalMethod(Exception):
def __init__(self):
Exception.__init__(self)
+class MissingChecksumEvent(bb.event.Event):
+ def __init__(self, url, md5sum, sha256sum):
+ self.url = url
+ self.checksums = {'md5sum': md5sum,
+ 'sha256sum': sha256sum}
+ bb.event.Event.__init__(self)
+
class URI(object):
"""
@@ -403,8 +411,6 @@
type, host, path, user, pswd, p = decoded
- if not path:
- raise MissingParameterError('path', "encoded from the data %s" % str(decoded))
if not type:
raise MissingParameterError('type', "encoded from the data %s" % str(decoded))
url = '%s://' % type
@@ -415,17 +421,18 @@
url += "@"
if host and type != "file":
url += "%s" % host
- # Standardise path to ensure comparisons work
- while '//' in path:
- path = path.replace("//", "/")
- url += "%s" % urllib.parse.quote(path)
+ if path:
+ # Standardise path to ensure comparisons work
+ while '//' in path:
+ path = path.replace("//", "/")
+ url += "%s" % urllib.parse.quote(path)
if p:
for parm in p:
url += ";%s=%s" % (parm, p[parm])
return url
-def uri_replace(ud, uri_find, uri_replace, replacements, d):
+def uri_replace(ud, uri_find, uri_replace, replacements, d, mirrortarball=None):
if not ud.url or not uri_find or not uri_replace:
logger.error("uri_replace: passed an undefined value, not replacing")
return None
@@ -455,7 +462,7 @@
result_decoded[loc][k] = uri_replace_decoded[loc][k]
elif (re.match(regexp, uri_decoded[loc])):
if not uri_replace_decoded[loc]:
- result_decoded[loc] = ""
+ result_decoded[loc] = ""
else:
for k in replacements:
uri_replace_decoded[loc] = uri_replace_decoded[loc].replace(k, replacements[k])
@@ -464,9 +471,9 @@
if loc == 2:
# Handle path manipulations
basename = None
- if uri_decoded[0] != uri_replace_decoded[0] and ud.mirrortarball:
+ if uri_decoded[0] != uri_replace_decoded[0] and mirrortarball:
# If the source and destination url types differ, must be a mirrortarball mapping
- basename = os.path.basename(ud.mirrortarball)
+ basename = os.path.basename(mirrortarball)
# Kill parameters, they make no sense for mirror tarballs
uri_decoded[5] = {}
elif ud.localpath and ud.method.supports_checksum(ud):
@@ -584,6 +591,14 @@
ud.sha256_name, sha256data))
raise NoChecksumError('Missing SRC_URI checksum', ud.url)
+ bb.event.fire(MissingChecksumEvent(ud.url, md5data, sha256data), d)
+
+ if strict == "ignore":
+ return {
+ _MD5_KEY: md5data,
+ _SHA256_KEY: sha256data
+ }
+
# Log missing sums so user can more easily add them
logger.warning('Missing md5 SRC_URI checksum for %s, consider adding to the recipe:\n'
'SRC_URI[%s] = "%s"',
@@ -733,7 +748,7 @@
In the multi SCM case, we build a value based on SRCREV_FORMAT which must
have been set.
- The idea here is that we put the string "AUTOINC+" into return value if the revisions are not
+ The idea here is that we put the string "AUTOINC+" into return value if the revisions are not
incremental, other code is then responsible for turning that into an increasing value (if needed)
A method_name can be supplied to retrieve an alternatively formatted revision from a fetcher, if
@@ -785,7 +800,7 @@
format = re.sub(name_to_rev_re, lambda match: name_to_rev[match.group(0)], format)
if seenautoinc:
- format = "AUTOINC+" + format
+ format = "AUTOINC+" + format
return format
@@ -892,45 +907,47 @@
replacements["BASENAME"] = origud.path.split("/")[-1]
replacements["MIRRORNAME"] = origud.host.replace(':','.') + origud.path.replace('/', '.').replace('*', '.')
- def adduri(ud, uris, uds, mirrors):
+ def adduri(ud, uris, uds, mirrors, tarballs):
for line in mirrors:
try:
(find, replace) = line
except ValueError:
continue
- newuri = uri_replace(ud, find, replace, replacements, ld)
- if not newuri or newuri in uris or newuri == origud.url:
- continue
- if not trusted_network(ld, newuri):
- logger.debug(1, "Mirror %s not in the list of trusted networks, skipping" % (newuri))
- continue
+ for tarball in tarballs:
+ newuri = uri_replace(ud, find, replace, replacements, ld, tarball)
+ if not newuri or newuri in uris or newuri == origud.url:
+ continue
- # Create a local copy of the mirrors minus the current line
- # this will prevent us from recursively processing the same line
- # as well as indirect recursion A -> B -> C -> A
- localmirrors = list(mirrors)
- localmirrors.remove(line)
+ if not trusted_network(ld, newuri):
+ logger.debug(1, "Mirror %s not in the list of trusted networks, skipping" % (newuri))
+ continue
- try:
- newud = FetchData(newuri, ld)
- newud.setup_localpath(ld)
- except bb.fetch2.BBFetchException as e:
- logger.debug(1, "Mirror fetch failure for url %s (original url: %s)" % (newuri, origud.url))
- logger.debug(1, str(e))
+ # Create a local copy of the mirrors minus the current line
+ # this will prevent us from recursively processing the same line
+ # as well as indirect recursion A -> B -> C -> A
+ localmirrors = list(mirrors)
+ localmirrors.remove(line)
+
try:
- # setup_localpath of file:// urls may fail, we should still see
- # if mirrors of the url exist
- adduri(newud, uris, uds, localmirrors)
- except UnboundLocalError:
- pass
- continue
- uris.append(newuri)
- uds.append(newud)
+ newud = FetchData(newuri, ld)
+ newud.setup_localpath(ld)
+ except bb.fetch2.BBFetchException as e:
+ logger.debug(1, "Mirror fetch failure for url %s (original url: %s)" % (newuri, origud.url))
+ logger.debug(1, str(e))
+ try:
+ # setup_localpath of file:// urls may fail, we should still see
+ # if mirrors of the url exist
+ adduri(newud, uris, uds, localmirrors, tarballs)
+ except UnboundLocalError:
+ pass
+ continue
+ uris.append(newuri)
+ uds.append(newud)
- adduri(newud, uris, uds, localmirrors)
+ adduri(newud, uris, uds, localmirrors, tarballs)
- adduri(origud, uris, uds, mirrors)
+ adduri(origud, uris, uds, mirrors, origud.mirrortarballs or [None])
return uris, uds
@@ -975,8 +992,8 @@
# We may be obtaining a mirror tarball which needs further processing by the real fetcher
# If that tarball is a local file:// we need to provide a symlink to it
dldir = ld.getVar("DL_DIR")
- if origud.mirrortarball and os.path.basename(ud.localpath) == os.path.basename(origud.mirrortarball) \
- and os.path.basename(ud.localpath) != os.path.basename(origud.localpath):
+
+ if origud.mirrortarballs and os.path.basename(ud.localpath) in origud.mirrortarballs and os.path.basename(ud.localpath) != os.path.basename(origud.localpath):
# Create donestamp in old format to avoid triggering a re-download
if ud.donestamp:
bb.utils.mkdirhier(os.path.dirname(ud.donestamp))
@@ -993,7 +1010,7 @@
pass
if not verify_donestamp(origud, ld) or origud.method.need_update(origud, ld):
origud.method.download(origud, ld)
- if hasattr(origud.method,"build_mirror_data"):
+ if hasattr(origud.method, "build_mirror_data"):
origud.method.build_mirror_data(origud, ld)
return origud.localpath
# Otherwise the result is a local file:// and we symlink to it
@@ -1015,7 +1032,7 @@
except IOError as e:
if e.errno in [os.errno.ESTALE]:
- logger.warn("Stale Error Observed %s." % ud.url)
+ logger.warning("Stale Error Observed %s." % ud.url)
return False
raise
@@ -1115,7 +1132,7 @@
attempts.append("SRCREV")
for a in attempts:
- srcrev = d.getVar(a)
+ srcrev = d.getVar(a)
if srcrev and srcrev != "INVALID":
break
@@ -1130,7 +1147,7 @@
if srcrev == "INVALID" or not srcrev:
return parmrev
if srcrev != parmrev:
- raise FetchError("Conflicting revisions (%s from SRCREV and %s from the url) found, please spcify one valid value" % (srcrev, parmrev))
+ raise FetchError("Conflicting revisions (%s from SRCREV and %s from the url) found, please specify one valid value" % (srcrev, parmrev))
return parmrev
if srcrev == "INVALID" or not srcrev:
@@ -1190,7 +1207,7 @@
self.localfile = ""
self.localpath = None
self.lockfile = None
- self.mirrortarball = None
+ self.mirrortarballs = []
self.basename = None
self.basepath = None
(self.type, self.host, self.path, self.user, self.pswd, self.parm) = decodeurl(d.expand(url))
@@ -1228,7 +1245,7 @@
for m in methods:
if m.supports(self, d):
self.method = m
- break
+ break
if not self.method:
raise NoMethodError(url)
@@ -1263,7 +1280,7 @@
elif self.basepath or self.basename:
basepath = dldir + os.sep + (self.basepath or self.basename)
else:
- bb.fatal("Can't determine lock path for url %s" % url)
+ bb.fatal("Can't determine lock path for url %s" % url)
self.donestamp = basepath + '.done'
self.lockfile = basepath + '.lock'
@@ -1326,13 +1343,13 @@
if os.path.isdir(urldata.localpath) == True:
return False
if urldata.localpath.find("*") != -1:
- return False
+ return False
return True
def recommends_checksum(self, urldata):
"""
- Is the backend on where checksumming is recommended (should warnings
+ Is the backend on where checksumming is recommended (should warnings
be displayed if there is no checksum)?
"""
return False
@@ -1542,6 +1559,14 @@
key = self._revision_key(ud, d, name)
return "%s-%s" % (key, d.getVar("PN") or "")
+ def latest_versionstring(self, ud, d):
+ """
+ Compute the latest release name like "x.y.z" in "x.y.z+gitHASH"
+ by searching through the tags output of ls-remote, comparing
+ versions and returning the highest match as a (version, revision) pair.
+ """
+ return ('', '')
+
class Fetch(object):
def __init__(self, urls, d, cache = True, localonly = False, connection_cache = None):
if localonly and cache:
@@ -1612,7 +1637,7 @@
try:
self.d.setVar("BB_NO_NETWORK", network)
-
+
if verify_donestamp(ud, self.d) and not m.need_update(ud, self.d):
localpath = ud.localpath
elif m.try_premirror(ud, self.d):
@@ -1708,9 +1733,8 @@
ret = try_mirrors(self, self.d, ud, mirrors, True)
if not ret:
# Next try checking from the original uri, u
- try:
- ret = m.checkstatus(self, ud, self.d)
- except:
+ ret = m.checkstatus(self, ud, self.d)
+ if not ret:
# Finally, try checking uri, u, from MIRRORS
mirrors = mirror_from_string(self.d.getVar('MIRRORS'))
ret = try_mirrors(self, self.d, ud, mirrors, True)
@@ -1720,7 +1744,7 @@
def unpack(self, root, urls=None):
"""
- Check all urls exist upstream
+ Unpack urls to root
"""
if not urls:
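
The fetch2/__init__.py hunk above also guards against mirror recursion when expanding candidate URLs: before recursing into a mirror match, the matched mirror line is removed from the list passed down, so direct (A -> B -> A) and indirect (A -> B -> C -> A) loops terminate. A minimal sketch of that guard, using plain (pattern, replacement) string pairs in place of the real uri_replace()/FetchData machinery; all names and URLs below are illustrative:

    # Minimal sketch of the recursion guard used when expanding mirror URLs.
    # The matched mirror line is dropped before recursing, so direct loops
    # (A -> B -> A) and indirect loops (A -> B -> C -> A) terminate.
    def expand_mirrors(uri, mirrors, collected):
        for line in mirrors:
            pattern, replacement = line
            if pattern not in uri:
                continue
            newuri = uri.replace(pattern, replacement)
            if newuri in collected:
                continue
            collected.append(newuri)
            localmirrors = list(mirrors)
            localmirrors.remove(line)
            expand_mirrors(newuri, localmirrors, collected)
        return collected

    mirrors = [("http://a.example/", "http://b.example/"),
               ("http://b.example/", "http://a.example/")]
    print(expand_mirrors("http://a.example/src.tar.gz", mirrors, []))
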
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/fetch2/git.py b/import-layers/yocto-poky/bitbake/lib/bb/fetch2/git.py
index 7442f84..5ef8cd6 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/fetch2/git.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/fetch2/git.py
@@ -70,11 +70,14 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+import collections
import errno
+import fnmatch
import os
import re
+import subprocess
+import tempfile
import bb
-import errno
import bb.progress
from bb.fetch2 import FetchMethod
from bb.fetch2 import runfetchcmd
@@ -172,18 +175,66 @@
branches = ud.parm.get("branch", "master").split(',')
if len(branches) != len(ud.names):
raise bb.fetch2.ParameterError("The number of name and branch parameters is not balanced", ud.url)
+
+ ud.cloneflags = "-s -n"
+ if ud.bareclone:
+ ud.cloneflags += " --mirror"
+
+ ud.shallow = d.getVar("BB_GIT_SHALLOW") == "1"
+ ud.shallow_extra_refs = (d.getVar("BB_GIT_SHALLOW_EXTRA_REFS") or "").split()
+
+ depth_default = d.getVar("BB_GIT_SHALLOW_DEPTH")
+ if depth_default is not None:
+ try:
+ depth_default = int(depth_default or 0)
+ except ValueError:
+ raise bb.fetch2.FetchError("Invalid depth for BB_GIT_SHALLOW_DEPTH: %s" % depth_default)
+ else:
+ if depth_default < 0:
+ raise bb.fetch2.FetchError("Invalid depth for BB_GIT_SHALLOW_DEPTH: %s" % depth_default)
+ else:
+ depth_default = 1
+ ud.shallow_depths = collections.defaultdict(lambda: depth_default)
+
+ revs_default = d.getVar("BB_GIT_SHALLOW_REVS", True)
+ ud.shallow_revs = []
ud.branches = {}
for pos, name in enumerate(ud.names):
branch = branches[pos]
ud.branches[name] = branch
ud.unresolvedrev[name] = branch
+ shallow_depth = d.getVar("BB_GIT_SHALLOW_DEPTH_%s" % name)
+ if shallow_depth is not None:
+ try:
+ shallow_depth = int(shallow_depth or 0)
+ except ValueError:
+ raise bb.fetch2.FetchError("Invalid depth for BB_GIT_SHALLOW_DEPTH_%s: %s" % (name, shallow_depth))
+ else:
+ if shallow_depth < 0:
+ raise bb.fetch2.FetchError("Invalid depth for BB_GIT_SHALLOW_DEPTH_%s: %s" % (name, shallow_depth))
+ ud.shallow_depths[name] = shallow_depth
+
+ revs = d.getVar("BB_GIT_SHALLOW_REVS_%s" % name)
+ if revs is not None:
+ ud.shallow_revs.extend(revs.split())
+ elif revs_default is not None:
+ ud.shallow_revs.extend(revs_default.split())
+
+ if (ud.shallow and
+ not ud.shallow_revs and
+ all(ud.shallow_depths[n] == 0 for n in ud.names)):
+ # Shallow disabled for this URL
+ ud.shallow = False
+
if ud.usehead:
ud.unresolvedrev['default'] = 'HEAD'
ud.basecmd = d.getVar("FETCHCMD_git") or "git -c core.fsyncobjectfiles=0"
- ud.write_tarballs = ((d.getVar("BB_GENERATE_MIRROR_TARBALLS") or "0") != "0") or ud.rebaseable
+ write_tarballs = d.getVar("BB_GENERATE_MIRROR_TARBALLS") or "0"
+ ud.write_tarballs = write_tarballs != "0" or ud.rebaseable
+ ud.write_shallow_tarballs = (d.getVar("BB_GENERATE_SHALLOW_TARBALLS") or write_tarballs) != "0"
ud.setup_revisions(d)
@@ -205,13 +256,42 @@
if ud.rebaseable:
for name in ud.names:
gitsrcname = gitsrcname + '_' + ud.revisions[name]
- ud.mirrortarball = 'git2_%s.tar.gz' % gitsrcname
- ud.fullmirror = os.path.join(d.getVar("DL_DIR"), ud.mirrortarball)
- gitdir = d.getVar("GITDIR") or (d.getVar("DL_DIR") + "/git2/")
- ud.clonedir = os.path.join(gitdir, gitsrcname)
+ dl_dir = d.getVar("DL_DIR")
+ gitdir = d.getVar("GITDIR") or (dl_dir + "/git2/")
+ ud.clonedir = os.path.join(gitdir, gitsrcname)
ud.localfile = ud.clonedir
+ mirrortarball = 'git2_%s.tar.gz' % gitsrcname
+ ud.fullmirror = os.path.join(dl_dir, mirrortarball)
+ ud.mirrortarballs = [mirrortarball]
+ if ud.shallow:
+ tarballname = gitsrcname
+ if ud.bareclone:
+ tarballname = "%s_bare" % tarballname
+
+ if ud.shallow_revs:
+ tarballname = "%s_%s" % (tarballname, "_".join(sorted(ud.shallow_revs)))
+
+ for name, revision in sorted(ud.revisions.items()):
+ tarballname = "%s_%s" % (tarballname, ud.revisions[name][:7])
+ depth = ud.shallow_depths[name]
+ if depth:
+ tarballname = "%s-%s" % (tarballname, depth)
+
+ shallow_refs = []
+ if not ud.nobranch:
+ shallow_refs.extend(ud.branches.values())
+ if ud.shallow_extra_refs:
+ shallow_refs.extend(r.replace('refs/heads/', '').replace('*', 'ALL') for r in ud.shallow_extra_refs)
+ if shallow_refs:
+ tarballname = "%s_%s" % (tarballname, "_".join(sorted(shallow_refs)).replace('/', '.'))
+
+ fetcher = self.__class__.__name__.lower()
+ ud.shallowtarball = '%sshallow_%s.tar.gz' % (fetcher, tarballname)
+ ud.fullshallow = os.path.join(dl_dir, ud.shallowtarball)
+ ud.mirrortarballs.insert(0, ud.shallowtarball)
+
def localpath(self, ud, d):
return ud.clonedir
@@ -221,6 +301,8 @@
for name in ud.names:
if not self._contains_ref(ud, d, name, ud.clonedir):
return True
+ if ud.shallow and ud.write_shallow_tarballs and not os.path.exists(ud.fullshallow):
+ return True
if ud.write_tarballs and not os.path.exists(ud.fullmirror):
return True
return False
@@ -237,8 +319,16 @@
def download(self, ud, d):
"""Fetch url"""
- # If the checkout doesn't exist and the mirror tarball does, extract it
- if not os.path.exists(ud.clonedir) and os.path.exists(ud.fullmirror):
+ no_clone = not os.path.exists(ud.clonedir)
+ need_update = no_clone or self.need_update(ud, d)
+
+ # A current clone is preferred to either tarball, a shallow tarball is
+ # preferred to an out of date clone, and a missing clone will use
+ # either tarball.
+ if ud.shallow and os.path.exists(ud.fullshallow) and need_update:
+ ud.localpath = ud.fullshallow
+ return
+ elif os.path.exists(ud.fullmirror) and no_clone:
bb.utils.mkdirhier(ud.clonedir)
runfetchcmd("tar -xzf %s" % ud.fullmirror, d, workdir=ud.clonedir)
@@ -284,9 +374,21 @@
raise bb.fetch2.FetchError("Unable to find revision %s in branch %s even from upstream" % (ud.revisions[name], ud.branches[name]))
def build_mirror_data(self, ud, d):
- # Generate a mirror tarball if needed
- if ud.write_tarballs and not os.path.exists(ud.fullmirror):
- # it's possible that this symlink points to read-only filesystem with PREMIRROR
+ if ud.shallow and ud.write_shallow_tarballs:
+ if not os.path.exists(ud.fullshallow):
+ if os.path.islink(ud.fullshallow):
+ os.unlink(ud.fullshallow)
+ tempdir = tempfile.mkdtemp(dir=d.getVar('DL_DIR'))
+ shallowclone = os.path.join(tempdir, 'git')
+ try:
+ self.clone_shallow_local(ud, shallowclone, d)
+
+ logger.info("Creating tarball of git repository")
+ runfetchcmd("tar -czf %s ." % ud.fullshallow, d, workdir=shallowclone)
+ runfetchcmd("touch %s.done" % ud.fullshallow, d)
+ finally:
+ bb.utils.remove(tempdir, recurse=True)
+ elif ud.write_tarballs and not os.path.exists(ud.fullmirror):
if os.path.islink(ud.fullmirror):
os.unlink(ud.fullmirror)
@@ -294,6 +396,62 @@
runfetchcmd("tar -czf %s ." % ud.fullmirror, d, workdir=ud.clonedir)
runfetchcmd("touch %s.done" % ud.fullmirror, d)
+ def clone_shallow_local(self, ud, dest, d):
+ """Clone the repo and make it shallow.
+
+ The upstream url of the new clone isn't set at this time, as it'll be
+ set correctly when unpacked."""
+ runfetchcmd("%s clone %s %s %s" % (ud.basecmd, ud.cloneflags, ud.clonedir, dest), d)
+
+ to_parse, shallow_branches = [], []
+ for name in ud.names:
+ revision = ud.revisions[name]
+ depth = ud.shallow_depths[name]
+ if depth:
+ to_parse.append('%s~%d^{}' % (revision, depth - 1))
+
+ # For nobranch, we need a ref, otherwise the commits will be
+ # removed, and for non-nobranch, we truncate the branch to our
+ # srcrev, to avoid keeping unnecessary history beyond that.
+ branch = ud.branches[name]
+ if ud.nobranch:
+ ref = "refs/shallow/%s" % name
+ elif ud.bareclone:
+ ref = "refs/heads/%s" % branch
+ else:
+ ref = "refs/remotes/origin/%s" % branch
+
+ shallow_branches.append(ref)
+ runfetchcmd("%s update-ref %s %s" % (ud.basecmd, ref, revision), d, workdir=dest)
+
+ # Map srcrev+depths to revisions
+ parsed_depths = runfetchcmd("%s rev-parse %s" % (ud.basecmd, " ".join(to_parse)), d, workdir=dest)
+
+ # Resolve specified revisions
+ parsed_revs = runfetchcmd("%s rev-parse %s" % (ud.basecmd, " ".join('"%s^{}"' % r for r in ud.shallow_revs)), d, workdir=dest)
+ shallow_revisions = parsed_depths.splitlines() + parsed_revs.splitlines()
+
+ # Apply extra ref wildcards
+ all_refs = runfetchcmd('%s for-each-ref "--format=%%(refname)"' % ud.basecmd,
+ d, workdir=dest).splitlines()
+ for r in ud.shallow_extra_refs:
+ if not ud.bareclone:
+ r = r.replace('refs/heads/', 'refs/remotes/origin/')
+
+ if '*' in r:
+ matches = filter(lambda a: fnmatch.fnmatchcase(a, r), all_refs)
+ shallow_branches.extend(matches)
+ else:
+ shallow_branches.append(r)
+
+ # Make the repository shallow
+ shallow_cmd = ['git', 'make-shallow', '-s']
+ for b in shallow_branches:
+ shallow_cmd.append('-r')
+ shallow_cmd.append(b)
+ shallow_cmd.extend(shallow_revisions)
+ runfetchcmd(subprocess.list2cmdline(shallow_cmd), d, workdir=dest)
+
def unpack(self, ud, destdir, d):
""" unpack the downloaded src to destdir"""
@@ -310,11 +468,12 @@
if os.path.exists(destdir):
bb.utils.prunedir(destdir)
- cloneflags = "-s -n"
- if ud.bareclone:
- cloneflags += " --mirror"
+ if ud.shallow and (not os.path.exists(ud.clonedir) or self.need_update(ud, d)):
+ bb.utils.mkdirhier(destdir)
+ runfetchcmd("tar -xzf %s" % ud.fullshallow, d, workdir=destdir)
+ else:
+ runfetchcmd("%s clone %s %s/ %s" % (ud.basecmd, ud.cloneflags, ud.clonedir, destdir), d)
- runfetchcmd("%s clone %s %s/ %s" % (ud.basecmd, cloneflags, ud.clonedir, destdir), d)
repourl = self._get_repo_url(ud)
runfetchcmd("%s remote set-url origin %s" % (ud.basecmd, repourl), d, workdir=destdir)
if not ud.nocheckout:
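
The git fetcher changes above derive a distinct mirror-tarball name for shallow clones from the source name, the short revision, the optional per-name depth, and the sorted branch refs. A minimal sketch of that naming scheme with hypothetical input values; shallow_revs and extra refs are omitted, and the real fetcher builds the name inside urldata_init() from the FetchData fields shown in the diff:

    # Minimal sketch of the shallow mirror-tarball naming added above.
    def shallow_tarball_name(gitsrcname, revisions, depths, branches, bare=False):
        name = gitsrcname + ("_bare" if bare else "")
        for rev_name, revision in sorted(revisions.items()):
            name += "_" + revision[:7]          # short revision
            depth = depths.get(rev_name, 1)
            if depth:
                name += "-%d" % depth           # per-name shallow depth
        refs = sorted(branches.values())
        if refs:
            name += "_" + "_".join(refs).replace("/", ".")
        return "gitshallow_%s.tar.gz" % name

    print(shallow_tarball_name("git.example.com.repo.git",
                               revisions={"default": "0123456789abcdef"},
                               depths={"default": 1},
                               branches={"default": "master"}))
    # gitshallow_git.example.com.repo.git_0123456-1_master.tar.gz
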
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/fetch2/gitannex.py b/import-layers/yocto-poky/bitbake/lib/bb/fetch2/gitannex.py
index c66c211..a9b69ca 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/fetch2/gitannex.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/fetch2/gitannex.py
@@ -33,6 +33,11 @@
"""
return ud.type in ['gitannex']
+ def urldata_init(self, ud, d):
+ super(GitANNEX, self).urldata_init(ud, d)
+ if ud.shallow:
+ ud.shallow_extra_refs += ['refs/heads/git-annex', 'refs/heads/synced/*']
+
def uses_annex(self, ud, d, wd):
for name in ud.names:
try:
@@ -55,9 +60,21 @@
def download(self, ud, d):
Git.download(self, ud, d)
- annex = self.uses_annex(ud, d, ud.clonedir)
- if annex:
- self.update_annex(ud, d, ud.clonedir)
+ if not ud.shallow or ud.localpath != ud.fullshallow:
+ if self.uses_annex(ud, d, ud.clonedir):
+ self.update_annex(ud, d, ud.clonedir)
+
+ def clone_shallow_local(self, ud, dest, d):
+ super(GitANNEX, self).clone_shallow_local(ud, dest, d)
+
+ try:
+ runfetchcmd("%s annex init" % ud.basecmd, d, workdir=dest)
+ except bb.fetch.FetchError:
+ pass
+
+ if self.uses_annex(ud, d, dest):
+ runfetchcmd("%s annex get" % ud.basecmd, d, workdir=dest)
+ runfetchcmd("chmod u+w -R %s/.git/annex" % (dest), d, quiet=True, workdir=dest)
def unpack(self, ud, destdir, d):
Git.unpack(self, ud, destdir, d)
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/fetch2/gitsm.py b/import-layers/yocto-poky/bitbake/lib/bb/fetch2/gitsm.py
index a95584c..0aff100 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/fetch2/gitsm.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/fetch2/gitsm.py
@@ -117,14 +117,19 @@
def download(self, ud, d):
Git.download(self, ud, d)
- submodules = self.uses_submodules(ud, d, ud.clonedir)
- if submodules:
- self.update_submodules(ud, d)
+ if not ud.shallow or ud.localpath != ud.fullshallow:
+ submodules = self.uses_submodules(ud, d, ud.clonedir)
+ if submodules:
+ self.update_submodules(ud, d)
+
+ def clone_shallow_local(self, ud, dest, d):
+ super(GitSM, self).clone_shallow_local(ud, dest, d)
+
+ runfetchcmd('cp -fpPRH "%s/modules" "%s/"' % (ud.clonedir, os.path.join(dest, '.git')), d)
def unpack(self, ud, destdir, d):
Git.unpack(self, ud, destdir, d)
-
- submodules = self.uses_submodules(ud, d, ud.destdir)
- if submodules:
+
+ if self.uses_submodules(ud, d, ud.destdir):
runfetchcmd(ud.basecmd + " checkout " + ud.revisions[ud.names[0]], d, workdir=ud.destdir)
runfetchcmd(ud.basecmd + " submodule update --init --recursive", d, workdir=ud.destdir)
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/fetch2/hg.py b/import-layers/yocto-poky/bitbake/lib/bb/fetch2/hg.py
index b5f2686..d0857e6 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/fetch2/hg.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/fetch2/hg.py
@@ -76,8 +76,9 @@
# Create paths to mercurial checkouts
hgsrcname = '%s_%s_%s' % (ud.module.replace('/', '.'), \
ud.host, ud.path.replace('/', '.'))
- ud.mirrortarball = 'hg_%s.tar.gz' % hgsrcname
- ud.fullmirror = os.path.join(d.getVar("DL_DIR"), ud.mirrortarball)
+ mirrortarball = 'hg_%s.tar.gz' % hgsrcname
+ ud.fullmirror = os.path.join(d.getVar("DL_DIR"), mirrortarball)
+ ud.mirrortarballs = [mirrortarball]
hgdir = d.getVar("HGDIR") or (d.getVar("DL_DIR") + "/hg/")
ud.pkgdir = os.path.join(hgdir, hgsrcname)
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/fetch2/npm.py b/import-layers/yocto-poky/bitbake/lib/bb/fetch2/npm.py
index 73a75fe..b5f148c 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/fetch2/npm.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/fetch2/npm.py
@@ -91,9 +91,10 @@
ud.prefixdir = prefixdir
ud.write_tarballs = ((d.getVar("BB_GENERATE_MIRROR_TARBALLS") or "0") != "0")
- ud.mirrortarball = 'npm_%s-%s.tar.xz' % (ud.pkgname, ud.version)
- ud.mirrortarball = ud.mirrortarball.replace('/', '-')
- ud.fullmirror = os.path.join(d.getVar("DL_DIR"), ud.mirrortarball)
+ mirrortarball = 'npm_%s-%s.tar.xz' % (ud.pkgname, ud.version)
+ mirrortarball = mirrortarball.replace('/', '-')
+ ud.fullmirror = os.path.join(d.getVar("DL_DIR"), mirrortarball)
+ ud.mirrortarballs = [mirrortarball]
def need_update(self, ud, d):
if os.path.exists(ud.localpath):
@@ -262,26 +263,27 @@
runfetchcmd("tar -xJf %s" % (ud.fullmirror), d, workdir=dest)
return
- shwrf = d.getVar('NPM_SHRINKWRAP')
- logger.debug(2, "NPM shrinkwrap file is %s" % shwrf)
- if shwrf:
- try:
- with open(shwrf) as datafile:
- shrinkobj = json.load(datafile)
- except Exception as e:
- raise FetchError('Error loading NPM_SHRINKWRAP file "%s" for %s: %s' % (shwrf, ud.pkgname, str(e)))
- elif not ud.ignore_checksums:
- logger.warning('Missing shrinkwrap file in NPM_SHRINKWRAP for %s, this will lead to unreliable builds!' % ud.pkgname)
- lckdf = d.getVar('NPM_LOCKDOWN')
- logger.debug(2, "NPM lockdown file is %s" % lckdf)
- if lckdf:
- try:
- with open(lckdf) as datafile:
- lockdown = json.load(datafile)
- except Exception as e:
- raise FetchError('Error loading NPM_LOCKDOWN file "%s" for %s: %s' % (lckdf, ud.pkgname, str(e)))
- elif not ud.ignore_checksums:
- logger.warning('Missing lockdown file in NPM_LOCKDOWN for %s, this will lead to unreproducible builds!' % ud.pkgname)
+ if ud.parm.get("noverify", None) != '1':
+ shwrf = d.getVar('NPM_SHRINKWRAP')
+ logger.debug(2, "NPM shrinkwrap file is %s" % shwrf)
+ if shwrf:
+ try:
+ with open(shwrf) as datafile:
+ shrinkobj = json.load(datafile)
+ except Exception as e:
+ raise FetchError('Error loading NPM_SHRINKWRAP file "%s" for %s: %s' % (shwrf, ud.pkgname, str(e)))
+ elif not ud.ignore_checksums:
+ logger.warning('Missing shrinkwrap file in NPM_SHRINKWRAP for %s, this will lead to unreliable builds!' % ud.pkgname)
+ lckdf = d.getVar('NPM_LOCKDOWN')
+ logger.debug(2, "NPM lockdown file is %s" % lckdf)
+ if lckdf:
+ try:
+ with open(lckdf) as datafile:
+ lockdown = json.load(datafile)
+ except Exception as e:
+ raise FetchError('Error loading NPM_LOCKDOWN file "%s" for %s: %s' % (lckdf, ud.pkgname, str(e)))
+ elif not ud.ignore_checksums:
+ logger.warning('Missing lockdown file in NPM_LOCKDOWN for %s, this will lead to unreproducible builds!' % ud.pkgname)
if ('name' not in shrinkobj):
self._getdependencies(ud.pkgname, jsondepobj, ud.version, d, ud)
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/fetch2/repo.py b/import-layers/yocto-poky/bitbake/lib/bb/fetch2/repo.py
index 1be91cc..c22d9b5 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/fetch2/repo.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/fetch2/repo.py
@@ -27,6 +27,7 @@
import bb
from bb.fetch2 import FetchMethod
from bb.fetch2 import runfetchcmd
+from bb.fetch2 import logger
class Repo(FetchMethod):
"""Class to fetch a module or modules from repo (git) repositories"""
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/fetch2/wget.py b/import-layers/yocto-poky/bitbake/lib/bb/fetch2/wget.py
index ae0ffa8..7c49c2b 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/fetch2/wget.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/fetch2/wget.py
@@ -30,6 +30,7 @@
import subprocess
import os
import logging
+import errno
import bb
import bb.progress
import urllib.request, urllib.parse, urllib.error
@@ -89,13 +90,13 @@
self.basecmd = d.getVar("FETCHCMD_wget") or "/usr/bin/env wget -t 2 -T 30 --passive-ftp --no-check-certificate"
- def _runwget(self, ud, d, command, quiet):
+ def _runwget(self, ud, d, command, quiet, workdir=None):
progresshandler = WgetProgressHandler(d)
logger.debug(2, "Fetching %s using command '%s'" % (ud.url, command))
bb.fetch2.check_network_access(d, command, ud.url)
- runfetchcmd(command + ' --progress=dot -v', d, quiet, log=progresshandler)
+ runfetchcmd(command + ' --progress=dot -v', d, quiet, log=progresshandler, workdir=workdir)
def download(self, ud, d):
"""Fetch urls"""
@@ -206,8 +207,21 @@
h.request(req.get_method(), req.selector, req.data, headers)
except socket.error as err: # XXX what error?
# Don't close connection when cache is enabled.
+ # Instead, try to detect connections that are no longer
+ # usable (for example, closed unexpectedly) and remove
+ # them from the cache.
if fetch.connection_cache is None:
h.close()
+ elif isinstance(err, OSError) and err.errno == errno.EBADF:
+ # This happens when the server closes the connection despite the Keep-Alive.
+ # Apparently urllib then uses the file descriptor, expecting it to be
+ # connected, when in reality the connection is already gone.
+ # We let the request fail and expect it to be
+ # tried once more ("try_again" in check_status()),
+ # with the dead connection removed from the cache.
+ # If it still fails, we give up, which can happen for bad
+ # HTTP proxy settings.
+ fetch.connection_cache.remove_connection(h.host, h.port)
raise urllib.error.URLError(err)
else:
try:
@@ -269,11 +283,6 @@
"""
http_error_403 = http_error_405
- """
- Some servers (e.g. FusionForge) returns 406 Not Acceptable when they
- actually mean 405 Method Not Allowed.
- """
- http_error_406 = http_error_405
class FixedHTTPRedirectHandler(urllib.request.HTTPRedirectHandler):
"""
@@ -302,7 +311,9 @@
uri = ud.url.split(";")[0]
r = urllib.request.Request(uri)
r.get_method = lambda: "HEAD"
-
+ # Some servers (FusionForge, as used on Alioth) require that the
+ # optional Accept header is set.
+ r.add_header("Accept", "*/*")
def add_basic_auth(login_str, request):
'''Adds Basic auth to http request, pass in login:password as string'''
import base64
@@ -408,17 +419,16 @@
Run fetch checkstatus to get directory information
"""
f = tempfile.NamedTemporaryFile()
+ with tempfile.TemporaryDirectory(prefix="wget-index-") as workdir, tempfile.NamedTemporaryFile(dir=workdir, prefix="wget-listing-") as f:
+ agent = "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.12) Gecko/20101027 Ubuntu/9.10 (karmic) Firefox/3.6.12"
+ fetchcmd = self.basecmd
+ fetchcmd += " -O " + f.name + " --user-agent='" + agent + "' '" + uri + "'"
+ try:
+ self._runwget(ud, d, fetchcmd, True, workdir=workdir)
+ fetchresult = f.read()
+ except bb.fetch2.BBFetchException:
+ fetchresult = ""
- agent = "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.12) Gecko/20101027 Ubuntu/9.10 (karmic) Firefox/3.6.12"
- fetchcmd = self.basecmd
- fetchcmd += " -O " + f.name + " --user-agent='" + agent + "' '" + uri + "'"
- try:
- self._runwget(ud, d, fetchcmd, True)
- fetchresult = f.read()
- except bb.fetch2.BBFetchException:
- fetchresult = ""
-
- f.close()
return fetchresult
def _check_latest_version(self, url, package, package_regex, current_version, ud, d):
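
The wget changes above add an explicit Accept header to the HEAD request used by checkstatus(), since some servers (FusionForge, as used on Alioth) reject requests that lack it. A stand-alone sketch of the same pattern using plain urllib, outside the fetcher's opener chain; the URL is a placeholder:

    import urllib.request

    # Stand-alone illustration of the HEAD-with-Accept pattern in checkstatus().
    req = urllib.request.Request("https://example.com/some/file.tar.gz")
    req.get_method = lambda: "HEAD"
    # Some servers (e.g. FusionForge) refuse requests without an Accept header.
    req.add_header("Accept", "*/*")
    try:
        with urllib.request.urlopen(req, timeout=30) as resp:
            print("status:", resp.status)
    except urllib.error.URLError as e:
        print("check failed:", e)
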
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/main.py b/import-layers/yocto-poky/bitbake/lib/bb/main.py
index 8c948c2..7711b29 100755
--- a/import-layers/yocto-poky/bitbake/lib/bb/main.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/main.py
@@ -28,6 +28,8 @@
import optparse
import warnings
import fcntl
+import time
+import traceback
import bb
from bb import event
@@ -37,11 +39,17 @@
from bb import server
from bb import cookerdata
+import bb.server.process
+import bb.server.xmlrpcclient
+
logger = logging.getLogger("BitBake")
class BBMainException(Exception):
pass
+class BBMainFatal(bb.BBHandledException):
+ pass
+
def present_options(optionlist):
if len(optionlist) > 1:
return ' or '.join([', '.join(optionlist[:-1]), optionlist[-1]])
@@ -58,9 +66,6 @@
if option.dest == 'ui':
valid_uis = list_extension_modules(bb.ui, 'main')
option.help = option.help.replace('@CHOICES@', present_options(valid_uis))
- elif option.dest == 'servertype':
- valid_server_types = list_extension_modules(bb.server, 'BitBakeServer')
- option.help = option.help.replace('@CHOICES@', present_options(valid_server_types))
return optparse.IndentedHelpFormatter.format_option(self, option)
@@ -148,11 +153,6 @@
"failed and anything depending on it cannot be built, as much as "
"possible will be built before stopping.")
- parser.add_option("-a", "--tryaltconfigs", action="store_true",
- dest="tryaltconfigs", default=False,
- help="Continue with builds by trying to use alternative providers "
- "where possible.")
-
parser.add_option("-f", "--force", action="store_true", dest="force", default=False,
help="Force the specified targets/task to run (invalidating any "
"existing stamp file).")
@@ -238,11 +238,6 @@
default=os.environ.get('BITBAKE_UI', 'knotty'),
help="The user interface to use (@CHOICES@ - default %default).")
- # @CHOICES@ is substituted out by BitbakeHelpFormatter above
- parser.add_option("-t", "--servertype", action="store", dest="servertype",
- default=["process", "xmlrpc"]["BBSERVER" in os.environ],
- help="Choose which server type to use (@CHOICES@ - default %default).")
-
parser.add_option("", "--token", action="store", dest="xmlrpctoken",
default=os.environ.get("BBTOKEN"),
help="Specify the connection token to be used when connecting "
@@ -258,15 +253,14 @@
help="Run bitbake without a UI, only starting a server "
"(cooker) process.")
- parser.add_option("", "--foreground", action="store_true",
- help="Run bitbake server in foreground.")
-
parser.add_option("-B", "--bind", action="store", dest="bind", default=False,
- help="The name/address for the bitbake server to bind to.")
+ help="The name/address for the bitbake xmlrpc server to bind to.")
- parser.add_option("-T", "--idle-timeout", type=int,
- default=int(os.environ.get("BBTIMEOUT", "0")),
- help="Set timeout to unload bitbake server due to inactivity")
+ parser.add_option("-T", "--idle-timeout", type=float, dest="server_timeout",
+ default=os.getenv("BB_SERVER_TIMEOUT"),
+ help="Set timeout to unload bitbake server due to inactivity, "
+ "set to -1 means no unload, "
+ "default: Environment variable BB_SERVER_TIMEOUT.")
parser.add_option("", "--no-setscene", action="store_true",
dest="nosetscene", default=False,
@@ -283,7 +277,7 @@
parser.add_option("-m", "--kill-server", action="store_true",
dest="kill_server", default=False,
- help="Terminate the remote server.")
+ help="Terminate any running bitbake server.")
parser.add_option("", "--observe-only", action="store_true",
dest="observe_only", default=False,
@@ -322,70 +316,20 @@
eventlog = "bitbake_eventlog_%s.json" % datetime.now().strftime("%Y%m%d%H%M%S")
options.writeeventlog = eventlog
- # if BBSERVER says to autodetect, let's do that
- if options.remote_server:
- port = -1
- if options.remote_server != 'autostart':
- host, port = options.remote_server.split(":", 2)
+ if options.bind:
+ try:
+ # Check that --bind is a ':'-delimited host:port value with a numeric port
+ (host, port) = options.bind.split(':')
port = int(port)
- # use automatic port if port set to -1, means read it from
- # the bitbake.lock file; this is a bit tricky, but we always expect
- # to be in the base of the build directory if we need to have a
- # chance to start the server later, anyway
- if port == -1:
- lock_location = "./bitbake.lock"
- # we try to read the address at all times; if the server is not started,
- # we'll try to start it after the first connect fails, below
- try:
- lf = open(lock_location, 'r')
- remotedef = lf.readline()
- [host, port] = remotedef.split(":")
- port = int(port)
- lf.close()
- options.remote_server = remotedef
- except Exception as e:
- if options.remote_server != 'autostart':
- raise BBMainException("Failed to read bitbake.lock (%s), invalid port" % str(e))
+ except (ValueError,IndexError):
+ raise BBMainException("FATAL: Malformed host:port bind parameter")
+ options.xmlrpcinterface = (host, port)
+ else:
+ options.xmlrpcinterface = (None, 0)
return options, targets[1:]
-def start_server(servermodule, configParams, configuration, features):
- server = servermodule.BitBakeServer()
- single_use = not configParams.server_only and os.getenv('BBSERVER') != 'autostart'
- if configParams.bind:
- (host, port) = configParams.bind.split(':')
- server.initServer((host, int(port)), single_use=single_use,
- idle_timeout=configParams.idle_timeout)
- configuration.interface = [server.serverImpl.host, server.serverImpl.port]
- else:
- server.initServer(single_use=single_use)
- configuration.interface = []
-
- try:
- configuration.setServerRegIdleCallback(server.getServerIdleCB())
-
- cooker = bb.cooker.BBCooker(configuration, features)
-
- server.addcooker(cooker)
- server.saveConnectionDetails()
- except Exception as e:
- while hasattr(server, "event_queue"):
- import queue
- try:
- event = server.event_queue.get(block=False)
- except (queue.Empty, IOError):
- break
- if isinstance(event, logging.LogRecord):
- logger.handle(event)
- raise
- if not configParams.foreground:
- server.detach()
- cooker.shutdown()
- cooker.lock.close()
- return server
-
-
def bitbake_main(configParams, configuration):
# Python multiprocessing requires /dev/shm on Linux
@@ -406,45 +350,15 @@
configuration.setConfigParameters(configParams)
- if configParams.server_only:
- if configParams.servertype != "xmlrpc":
- raise BBMainException("FATAL: If '--server-only' is defined, we must set the "
- "servertype as 'xmlrpc'.\n")
- if not configParams.bind:
- raise BBMainException("FATAL: The '--server-only' option requires a name/address "
- "to bind to with the -B option.\n")
- else:
- try:
- #Checking that the port is a number
- int(configParams.bind.split(":")[1])
- except (ValueError,IndexError):
- raise BBMainException(
- "FATAL: Malformed host:port bind parameter")
- if configParams.remote_server:
+ if configParams.server_only and configParams.remote_server:
raise BBMainException("FATAL: The '--server-only' option conflicts with %s.\n" %
("the BBSERVER environment variable" if "BBSERVER" in os.environ \
else "the '--remote-server' option"))
- elif configParams.foreground:
- raise BBMainException("FATAL: The '--foreground' option can only be used "
- "with --server-only.\n")
-
- if configParams.bind and configParams.servertype != "xmlrpc":
- raise BBMainException("FATAL: If '-B' or '--bind' is defined, we must "
- "set the servertype as 'xmlrpc'.\n")
-
- if configParams.remote_server and configParams.servertype != "xmlrpc":
- raise BBMainException("FATAL: If '--remote-server' is defined, we must "
- "set the servertype as 'xmlrpc'.\n")
-
- if configParams.observe_only and (not configParams.remote_server or configParams.bind):
+ if configParams.observe_only and not (configParams.remote_server or configParams.bind):
raise BBMainException("FATAL: '--observe-only' can only be used by UI clients "
"connecting to a server.\n")
- if configParams.kill_server and not configParams.remote_server:
- raise BBMainException("FATAL: '--kill-server' can only be used to "
- "terminate a remote server")
-
if "BBDEBUG" in os.environ:
level = int(os.environ["BBDEBUG"])
if level > configuration.debug:
@@ -453,9 +367,13 @@
bb.msg.init_msgconfig(configParams.verbose, configuration.debug,
configuration.debug_domains)
- server, server_connection, ui_module = setup_bitbake(configParams, configuration)
- if server_connection is None and configParams.kill_server:
- return 0
+ server_connection, ui_module = setup_bitbake(configParams, configuration)
+ # No server connection
+ if server_connection is None:
+ if configParams.status_only:
+ return 1
+ if configParams.kill_server:
+ return 0
if not configParams.server_only:
if configParams.status_only:
@@ -463,16 +381,15 @@
return 0
try:
+ for event in bb.event.ui_queue:
+ server_connection.events.queue_event(event)
+ bb.event.ui_queue = []
+
return ui_module.main(server_connection.connection, server_connection.events,
configParams)
finally:
- bb.event.ui_queue = []
server_connection.terminate()
else:
- print("Bitbake server address: %s, server port: %s" % (server.serverImpl.host,
- server.serverImpl.port))
- if configParams.foreground:
- server.serverImpl.serve_forever()
return 0
return 1
@@ -495,58 +412,93 @@
# Collect the feature set for the UI
featureset = getattr(ui_module, "featureSet", [])
- if configParams.server_only:
- for param in ('prefile', 'postfile'):
- value = getattr(configParams, param)
- if value:
- setattr(configuration, "%s_server" % param, value)
- param = "%s_server" % param
-
if extrafeatures:
for feature in extrafeatures:
if not feature in featureset:
featureset.append(feature)
- servermodule = import_extension_module(bb.server,
- configParams.servertype,
- 'BitBakeServer')
+ server_connection = None
+
if configParams.remote_server:
- if os.getenv('BBSERVER') == 'autostart':
- if configParams.remote_server == 'autostart' or \
- not servermodule.check_connection(configParams.remote_server, timeout=2):
- configParams.bind = 'localhost:0'
- srv = start_server(servermodule, configParams, configuration, featureset)
- configParams.remote_server = '%s:%d' % tuple(configuration.interface)
- bb.event.ui_queue = []
- # we start a stub server that is actually a XMLRPClient that connects to a real server
- from bb.server.xmlrpc import BitBakeXMLRPCClient
- server = servermodule.BitBakeXMLRPCClient(configParams.observe_only,
- configParams.xmlrpctoken)
- server.saveConnectionDetails(configParams.remote_server)
+ # Connect to a remote XMLRPC server
+ server_connection = bb.server.xmlrpcclient.connectXMLRPC(configParams.remote_server, featureset,
+ configParams.observe_only, configParams.xmlrpctoken)
else:
- # we start a server with a given configuration
- server = start_server(servermodule, configParams, configuration, featureset)
+ retries = 8
+ while retries:
+ try:
+ topdir, lock = lockBitbake()
+ sockname = topdir + "/bitbake.sock"
+ if lock:
+ if configParams.status_only or configParams.kill_server:
+ logger.info("bitbake server is not running.")
+ lock.close()
+ return None, None
+ # we start a server with a given configuration
+ logger.info("Starting bitbake server...")
+ # Clear the event queue since we already displayed messages
+ bb.event.ui_queue = []
+ server = bb.server.process.BitBakeServer(lock, sockname, configuration, featureset)
+
+ else:
+ logger.info("Reconnecting to bitbake server...")
+ if not os.path.exists(sockname):
+ print("Previous bitbake instance shutting down?, waiting to retry...")
+ i = 0
+ lock = None
+ # Wait for 5s or until we can get the lock
+ while not lock and i < 50:
+ time.sleep(0.1)
+ _, lock = lockBitbake()
+ i += 1
+ if lock:
+ bb.utils.unlockfile(lock)
+ raise bb.server.process.ProcessTimeout("Bitbake still shutting down as socket exists but no lock?")
+ if not configParams.server_only:
+ try:
+ server_connection = bb.server.process.connectProcessServer(sockname, featureset)
+ except EOFError:
+ # The server may have been shutting down but not closed the socket yet. If that happened,
+ # ignore it.
+ pass
+
+ if server_connection or configParams.server_only:
+ break
+ except BBMainFatal:
+ raise
+ except (Exception, bb.server.process.ProcessTimeout) as e:
+ if not retries:
+ raise
+ retries -= 1
+ if isinstance(e, (bb.server.process.ProcessTimeout, BrokenPipeError)):
+ logger.info("Retrying server connection...")
+ else:
+ logger.info("Retrying server connection... (%s)" % traceback.format_exc())
+ if not retries:
+ bb.fatal("Unable to connect to bitbake server, or start one")
+ if retries < 5:
+ time.sleep(5)
+
+ if configParams.kill_server:
+ server_connection.connection.terminateServer()
+ server_connection.terminate()
bb.event.ui_queue = []
+ logger.info("Terminated bitbake server.")
+ return None, None
- if configParams.server_only:
- server_connection = None
- else:
- try:
- server_connection = server.establishConnection(featureset)
- except Exception as e:
- bb.fatal("Could not connect to server %s: %s" % (configParams.remote_server, str(e)))
+ # Restore the environment in case the UI needs it
+ for k in cleanedvars:
+ os.environ[k] = cleanedvars[k]
- if configParams.kill_server:
- server_connection.connection.terminateServer()
- bb.event.ui_queue = []
- return None, None, None
+ logger.removeHandler(handler)
- server_connection.setupEventQueue()
+ return server_connection, ui_module
- # Restore the environment in case the UI needs it
- for k in cleanedvars:
- os.environ[k] = cleanedvars[k]
+def lockBitbake():
+ topdir = bb.cookerdata.findTopdir()
+ if not topdir:
+ bb.error("Unable to find conf/bblayers.conf or conf/bitbake.conf. BBAPTH is unset and/or not in a build directory?")
+ raise BBMainFatal
+ lockfile = topdir + "/bitbake.lock"
+ return topdir, bb.utils.lockfile(lockfile, False, False)
- logger.removeHandler(handler)
-
- return server, server_connection, ui_module
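
With the server-type selection removed, main.py now validates --bind itself and turns it into a (host, port) pair for the XMLRPC interface. A small sketch of that validation, with ValueError standing in for BBMainException:

    def parse_bind(bind):
        # Mirrors the host:port validation added to main.py above.
        try:
            host, port = bind.split(':')
            return (host, int(port))
        except (ValueError, IndexError):
            raise ValueError("Malformed host:port bind parameter: %r" % bind)

    print(parse_bind("localhost:8200"))   # ('localhost', 8200)
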
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/msg.py b/import-layers/yocto-poky/bitbake/lib/bb/msg.py
index 90b1582..f1723be 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/msg.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/msg.py
@@ -216,3 +216,10 @@
logger.handlers = [console]
logger.setLevel(level)
return logger
+
+def has_console_handler(logger):
+ for handler in logger.handlers:
+ if isinstance(handler, logging.StreamHandler):
+ if handler.stream in [sys.stderr, sys.stdout]:
+ return True
+ return False
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/parse/__init__.py b/import-layers/yocto-poky/bitbake/lib/bb/parse/__init__.py
index a2952ec..2fc4002 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/parse/__init__.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/parse/__init__.py
@@ -84,6 +84,10 @@
logger.debug(1, "Updating mtime cache for %s" % f)
update_mtime(f)
+def clear_cache():
+ global __mtime_cache
+ __mtime_cache = {}
+
def mark_dependency(d, f):
if f.startswith('./'):
f = "%s/%s" % (os.getcwd(), f[2:])
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/parse/parse_py/BBHandler.py b/import-layers/yocto-poky/bitbake/lib/bb/parse/parse_py/BBHandler.py
index fe918a4..f89ad24 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/parse/parse_py/BBHandler.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/parse/parse_py/BBHandler.py
@@ -144,7 +144,7 @@
try:
statements.eval(d)
except bb.parse.SkipRecipe:
- bb.data.setVar("__SKIPPED", True, d)
+ d.setVar("__SKIPPED", True)
if include == 0:
return { "" : d }
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/parse/parse_py/ConfHandler.py b/import-layers/yocto-poky/bitbake/lib/bb/parse/parse_py/ConfHandler.py
index f7d0cf7..97aa130 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/parse/parse_py/ConfHandler.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/parse/parse_py/ConfHandler.py
@@ -32,7 +32,7 @@
__config_regexp__ = re.compile( r"""
^
- (?P<exp>export\s*)?
+ (?P<exp>export\s+)?
(?P<var>[a-zA-Z0-9\-_+.${}/~]+?)
(\[(?P<flag>[a-zA-Z0-9\-_+.]+)\])?
@@ -69,18 +69,26 @@
def supports(fn, d):
return fn[-5:] == ".conf"
-def include(parentfn, fn, lineno, data, error_out):
+def include(parentfn, fns, lineno, data, error_out):
"""
error_out: A string indicating the verb (e.g. "include", "inherit") to be
used in a ParseError that will be raised if the file to be included could
not be included. Specify False to avoid raising an error in this case.
"""
+ fns = data.expand(fns)
+ parentfn = data.expand(parentfn)
+
+ # "include" or "require" accept zero to n space-separated file names to include.
+ for fn in fns.split():
+ include_single_file(parentfn, fn, lineno, data, error_out)
+
+def include_single_file(parentfn, fn, lineno, data, error_out):
+ """
+ Helper function for include() which does not expand or split its parameters.
+ """
if parentfn == fn: # prevent infinite recursion
return None
- fn = data.expand(fn)
- parentfn = data.expand(parentfn)
-
if not os.path.isabs(fn):
dname = os.path.dirname(parentfn)
bbpath = "%s:%s" % (dname, data.getVar("BBPATH"))
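
The ConfHandler change above lets include/require accept zero or more space-separated file names, expanding variables before splitting. A simplified sketch of that dispatch; expand() and handle_one() stand in for the datastore expansion and the original single-file include path:

    def include(parentfn, fns, expand, handle_one):
        # "include"/"require" accept zero or more space-separated file names;
        # variables are expanded before splitting, as in the diff above.
        for fn in expand(fns).split():
            handle_one(parentfn, fn)

    include("local.conf", "a.conf ${EXTRA_CONF}",
            expand=lambda s: s.replace("${EXTRA_CONF}", "b.conf"),
            handle_one=lambda parent, fn: print("including", fn, "from", parent))
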
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/process.py b/import-layers/yocto-poky/bitbake/lib/bb/process.py
index a4a5599..e69697c 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/process.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/process.py
@@ -94,46 +94,53 @@
if data is not None:
func(data)
+ def read_all_pipes(log, rin, outdata, errdata):
+ rlist = rin
+ stdoutbuf = b""
+ stderrbuf = b""
+
+ try:
+ r,w,e = select.select (rlist, [], [], 1)
+ except OSError as e:
+ if e.errno != errno.EINTR:
+ raise
+
+ readextras(r)
+
+ if pipe.stdout in r:
+ data = stdoutbuf + pipe.stdout.read()
+ if data is not None and len(data) > 0:
+ try:
+ data = data.decode("utf-8")
+ outdata.append(data)
+ log.write(data)
+ log.flush()
+ stdoutbuf = b""
+ except UnicodeDecodeError:
+ stdoutbuf = data
+
+ if pipe.stderr in r:
+ data = stderrbuf + pipe.stderr.read()
+ if data is not None and len(data) > 0:
+ try:
+ data = data.decode("utf-8")
+ errdata.append(data)
+ log.write(data)
+ log.flush()
+ stderrbuf = b""
+ except UnicodeDecodeError:
+ stderrbuf = data
+
try:
+ # Read all pipes while the process is open
while pipe.poll() is None:
- rlist = rin
- stdoutbuf = b""
- stderrbuf = b""
- try:
- r,w,e = select.select (rlist, [], [], 1)
- except OSError as e:
- if e.errno != errno.EINTR:
- raise
+ read_all_pipes(log, rin, outdata, errdata)
- if pipe.stdout in r:
- data = stdoutbuf + pipe.stdout.read()
- if data is not None and len(data) > 0:
- try:
- data = data.decode("utf-8")
- outdata.append(data)
- log.write(data)
- stdoutbuf = b""
- except UnicodeDecodeError:
- stdoutbuf = data
-
- if pipe.stderr in r:
- data = stderrbuf + pipe.stderr.read()
- if data is not None and len(data) > 0:
- try:
- data = data.decode("utf-8")
- errdata.append(data)
- log.write(data)
- stderrbuf = b""
- except UnicodeDecodeError:
- stderrbuf = data
-
- readextras(r)
-
- finally:
+ # Process closed, drain all pipes...
+ read_all_pipes(log, rin, outdata, errdata)
+ finally:
log.flush()
- readextras([fobj for fobj, _ in extrafiles])
-
if pipe.stdout is not None:
pipe.stdout.close()
if pipe.stderr is not None:
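
The process.py refactor above factors the select/read loop into read_all_pipes() and calls it once more after poll() reports that the child has exited, so output written just before exit is not lost. A simplified, stand-alone sketch of that drain-after-exit pattern; the real code also decodes UTF-8 incrementally and mirrors the output to a log:

    import os, select, subprocess

    def run_and_capture(cmd):
        # Read while the child runs, then drain the pipes once more after exit.
        proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        chunks = []

        def read_all_pipes():
            readable, _, _ = select.select([proc.stdout, proc.stderr], [], [], 1)
            for pipe in readable:
                data = os.read(pipe.fileno(), 4096)
                if data:
                    chunks.append(data)

        while proc.poll() is None:
            read_all_pipes()
        read_all_pipes()    # process closed, drain whatever is still buffered
        return b"".join(chunks)

    print(run_and_capture(["echo", "hello"]))
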
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/runqueue.py b/import-layers/yocto-poky/bitbake/lib/bb/runqueue.py
index 7d2ff81..ae12c25 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/runqueue.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/runqueue.py
@@ -1355,12 +1355,7 @@
logger.info("Tasks Summary: Attempted %d tasks of which %d didn't need to be rerun and all succeeded.", self.rqexe.stats.completed, self.rqexe.stats.skipped)
if self.state is runQueueFailed:
- if not self.rqdata.taskData[''].tryaltconfigs:
- raise bb.runqueue.TaskFailure(self.rqexe.failed_tids)
- for tid in self.rqexe.failed_tids:
- (mc, fn, tn, _) = split_tid_mcfn(tid)
- self.rqdata.taskData[mc].fail_fn(fn)
- self.rqdata.reset()
+ raise bb.runqueue.TaskFailure(self.rqexe.failed_tids)
if self.state is runQueueComplete:
# All done
@@ -1839,7 +1834,7 @@
Run the tasks in a queue prepared by rqdata.prepare()
"""
- if self.rqdata.setscenewhitelist and not self.rqdata.setscenewhitelist_checked:
+ if self.rqdata.setscenewhitelist is not None and not self.rqdata.setscenewhitelist_checked:
self.rqdata.setscenewhitelist_checked = True
# Check tasks that are going to run against the whitelist
@@ -1932,7 +1927,7 @@
self.rq.state = runQueueFailed
self.stats.taskFailed()
return True
- self.rq.fakeworker[mc].process.stdin.write(b"<runtask>" + pickle.dumps((taskfn, task, taskname, False, self.cooker.collection.get_file_appends(fn), taskdepdata, self.rqdata.setscene_enforce)) + b"</runtask>")
+ self.rq.fakeworker[mc].process.stdin.write(b"<runtask>" + pickle.dumps((taskfn, task, taskname, False, self.cooker.collection.get_file_appends(taskfn), taskdepdata, self.rqdata.setscene_enforce)) + b"</runtask>")
self.rq.fakeworker[mc].process.stdin.flush()
else:
self.rq.worker[mc].process.stdin.write(b"<runtask>" + pickle.dumps((taskfn, task, taskname, False, self.cooker.collection.get_file_appends(taskfn), taskdepdata, self.rqdata.setscene_enforce)) + b"</runtask>")
@@ -2254,7 +2249,7 @@
self.scenequeue_updatecounters(task)
def check_taskfail(self, task):
- if self.rqdata.setscenewhitelist:
+ if self.rqdata.setscenewhitelist is not None:
realtask = task.split('_setscene')[0]
(mc, fn, taskname, taskfn) = split_tid_mcfn(realtask)
pn = self.rqdata.dataCaches[mc].pkg_fn[taskfn]
@@ -2372,7 +2367,7 @@
self.rq.scenequeue_covered = self.scenequeue_covered
self.rq.scenequeue_notcovered = self.scenequeue_notcovered
- logger.debug(1, 'We can skip tasks %s', sorted(self.rq.scenequeue_covered))
+ logger.debug(1, 'We can skip tasks %s', "\n".join(sorted(self.rq.scenequeue_covered)))
self.rq.state = runQueueRunInit
@@ -2488,6 +2483,9 @@
runQueueEvent.__init__(self, task, stats, rq)
self.exitcode = exitcode
+ def __str__(self):
+ return "Task (%s) failed with exit code '%s'" % (self.taskstring, self.exitcode)
+
class sceneQueueTaskFailed(sceneQueueEvent):
"""
Event notifying a setscene task failed
@@ -2496,6 +2494,9 @@
sceneQueueEvent.__init__(self, task, stats, rq)
self.exitcode = exitcode
+ def __str__(self):
+ return "Setscene task (%s) failed with exit code '%s' - real task will be run instead" % (self.taskstring, self.exitcode)
+
class sceneQueueComplete(sceneQueueEvent):
"""
Event when all the sceneQueue tasks are complete
@@ -2602,7 +2603,7 @@
def check_setscene_enforce_whitelist(pn, taskname, whitelist):
import fnmatch
- if whitelist:
+ if whitelist is not None:
item = '%s:%s' % (pn, taskname)
for whitelist_item in whitelist:
if fnmatch.fnmatch(item, whitelist_item):
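
Several runqueue checks above switch from truthiness to "is not None", because an empty setscene whitelist (enforcement on, nothing allowed) must behave differently from no whitelist at all. A condensed version of the resulting check, mirroring check_setscene_enforce_whitelist() from the diff:

    import fnmatch

    def check_setscene_enforce_whitelist(pn, taskname, whitelist):
        # None -> enforcement disabled, everything allowed
        # []   -> enforcement enabled, nothing whitelisted
        if whitelist is not None:
            item = '%s:%s' % (pn, taskname)
            return any(fnmatch.fnmatch(item, w) for w in whitelist)
        return True

    print(check_setscene_enforce_whitelist("busybox", "do_compile", None))           # True
    print(check_setscene_enforce_whitelist("busybox", "do_compile", []))             # False
    print(check_setscene_enforce_whitelist("busybox", "do_compile", ["busybox:*"]))  # True
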
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/server/__init__.py b/import-layers/yocto-poky/bitbake/lib/bb/server/__init__.py
index 538a633..5a3fba9 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/server/__init__.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/server/__init__.py
@@ -18,82 +18,4 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
-""" Base code for Bitbake server process
-Have a common base for that all Bitbake server classes ensures a consistent
-approach to the interface, and minimize risks associated with code duplication.
-
-"""
-
-""" BaseImplServer() the base class for all XXServer() implementations.
-
- These classes contain the actual code that runs the server side, i.e.
- listens for the commands and executes them. Although these implementations
- contain all the data of the original bitbake command, i.e the cooker instance,
- they may well run on a different process or even machine.
-
-"""
-
-class BaseImplServer():
- def __init__(self):
- self._idlefuns = {}
-
- def addcooker(self, cooker):
- self.cooker = cooker
-
- def register_idle_function(self, function, data):
- """Register a function to be called while the server is idle"""
- assert hasattr(function, '__call__')
- self._idlefuns[function] = data
-
-
-
-""" BitBakeBaseServerConnection class is the common ancestor to all
- BitBakeServerConnection classes.
-
- These classes control the remote server. The only command currently
- implemented is the terminate() command.
-
-"""
-
-class BitBakeBaseServerConnection():
- def __init__(self, serverImpl):
- pass
-
- def terminate(self):
- pass
-
- def setupEventQueue(self):
- pass
-
-
-""" BitBakeBaseServer class is the common ancestor to all Bitbake servers
-
- Derive this class in order to implement a BitBakeServer which is the
- controlling stub for the actual server implementation
-
-"""
-class BitBakeBaseServer(object):
- def initServer(self):
- self.serverImpl = None # we ensure a runtime crash if not overloaded
- self.connection = None
- return
-
- def addcooker(self, cooker):
- self.cooker = cooker
- self.serverImpl.addcooker(cooker)
-
- def getServerIdleCB(self):
- return self.serverImpl.register_idle_function
-
- def saveConnectionDetails(self):
- return
-
- def detach(self):
- return
-
- def establishConnection(self, featureset):
- raise "Must redefine the %s.establishConnection()" % self.__class__.__name__
-
- def endSession(self):
- self.connection.terminate()
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/server/process.py b/import-layers/yocto-poky/bitbake/lib/bb/server/process.py
index c3c1450..3d31355 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/server/process.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/server/process.py
@@ -22,125 +22,245 @@
import bb
import bb.event
-import itertools
import logging
import multiprocessing
+import threading
+import array
import os
-import signal
import sys
import time
import select
-from queue import Empty
-from multiprocessing import Event, Process, util, Queue, Pipe, queues, Manager
-
-from . import BitBakeBaseServer, BitBakeBaseServerConnection, BaseImplServer
+import socket
+import subprocess
+import errno
+import re
+import datetime
+import bb.server.xmlrpcserver
+from bb import daemonize
+from multiprocessing import queues
logger = logging.getLogger('BitBake')
-class ServerCommunicator():
- def __init__(self, connection, event_handle, server):
- self.connection = connection
- self.event_handle = event_handle
- self.server = server
+class ProcessTimeout(SystemExit):
+ pass
- def runCommand(self, command):
- # @todo try/except
- self.connection.send(command)
-
- if not self.server.is_alive():
- raise SystemExit
-
- while True:
- # don't let the user ctrl-c while we're waiting for a response
- try:
- for idx in range(0,4): # 0, 1, 2, 3
- if self.connection.poll(5):
- return self.connection.recv()
- else:
- bb.warn("Timeout while attempting to communicate with bitbake server")
- bb.fatal("Gave up; Too many tries: timeout while attempting to communicate with bitbake server")
- except KeyboardInterrupt:
- pass
-
- def getEventHandle(self):
- return self.event_handle.value
-
-class EventAdapter():
- """
- Adapter to wrap our event queue since the caller (bb.event) expects to
- call a send() method, but our actual queue only has put()
- """
- def __init__(self, queue):
- self.queue = queue
-
- def send(self, event):
- try:
- self.queue.put(event)
- except Exception as err:
- print("EventAdapter puked: %s" % str(err))
-
-
-class ProcessServer(Process, BaseImplServer):
+class ProcessServer(multiprocessing.Process):
profile_filename = "profile.log"
profile_processed_filename = "profile.log.processed"
- def __init__(self, command_channel, event_queue, featurelist):
- BaseImplServer.__init__(self)
- Process.__init__(self)
- self.command_channel = command_channel
- self.event_queue = event_queue
- self.event = EventAdapter(event_queue)
- self.featurelist = featurelist
+ def __init__(self, lock, sock, sockname):
+ multiprocessing.Process.__init__(self)
+ self.command_channel = False
+ self.command_channel_reply = False
self.quit = False
self.heartbeat_seconds = 1 # default, BB_HEARTBEAT_EVENT will be checked once we have a datastore.
self.next_heartbeat = time.time()
- self.quitin, self.quitout = Pipe()
- self.event_handle = multiprocessing.Value("i")
+ self.event_handle = None
+ self.haveui = False
+ self.lastui = False
+ self.xmlrpc = False
+
+ self._idlefuns = {}
+
+ self.bitbake_lock = lock
+ self.sock = sock
+ self.sockname = sockname
+
+ def register_idle_function(self, function, data):
+ """Register a function to be called while the server is idle"""
+ assert hasattr(function, '__call__')
+ self._idlefuns[function] = data
def run(self):
- for event in bb.event.ui_queue:
- self.event_queue.put(event)
- self.event_handle.value = bb.event.register_UIHhandler(self, True)
+
+ if self.xmlrpcinterface[0]:
+ self.xmlrpc = bb.server.xmlrpcserver.BitBakeXMLRPCServer(self.xmlrpcinterface, self.cooker, self)
+
+ print("Bitbake XMLRPC server address: %s, server port: %s" % (self.xmlrpc.host, self.xmlrpc.port))
heartbeat_event = self.cooker.data.getVar('BB_HEARTBEAT_EVENT')
if heartbeat_event:
try:
self.heartbeat_seconds = float(heartbeat_event)
except:
- # Throwing an exception here causes bitbake to hang.
- # Just warn about the invalid setting and continue
bb.warn('Ignoring invalid BB_HEARTBEAT_EVENT=%s, must be a float specifying seconds.' % heartbeat_event)
- bb.cooker.server_main(self.cooker, self.main)
+
+ self.timeout = self.server_timeout or self.cooker.data.getVar('BB_SERVER_TIMEOUT')
+ try:
+ if self.timeout:
+ self.timeout = float(self.timeout)
+ except:
+ bb.warn('Ignoring invalid BB_SERVER_TIMEOUT=%s, must be a float specifying seconds.' % self.timeout)
+
+
+ try:
+ self.bitbake_lock.seek(0)
+ self.bitbake_lock.truncate()
+ if self.xmlrpc:
+ self.bitbake_lock.write("%s %s:%s\n" % (os.getpid(), self.xmlrpc.host, self.xmlrpc.port))
+ else:
+ self.bitbake_lock.write("%s\n" % (os.getpid()))
+ self.bitbake_lock.flush()
+ except Exception as e:
+ print("Error writing to lock file: %s" % str(e))
+ pass
+
+ if self.cooker.configuration.profile:
+ try:
+ import cProfile as profile
+ except:
+ import profile
+ prof = profile.Profile()
+
+ ret = profile.Profile.runcall(prof, self.main)
+
+ prof.dump_stats("profile.log")
+ bb.utils.process_profilelog("profile.log")
+ print("Raw profiling information saved to profile.log and processed statistics to profile.log.processed")
+
+ else:
+ ret = self.main()
+
+ return ret
def main(self):
- # Ignore SIGINT within the server, as all SIGINT handling is done by
- # the UI and communicated to us
- self.quitin.close()
- signal.signal(signal.SIGINT, signal.SIG_IGN)
+ self.cooker.pre_serve()
+
bb.utils.set_process_name("Cooker")
+
+ ready = []
+
+ self.controllersock = False
+ fds = [self.sock]
+ if self.xmlrpc:
+ fds.append(self.xmlrpc)
+ print("Entering server connection loop")
+
+ def disconnect_client(self, fds):
+ if not self.haveui:
+ return
+ print("Disconnecting Client")
+ fds.remove(self.controllersock)
+ fds.remove(self.command_channel)
+ bb.event.unregister_UIHhandler(self.event_handle, True)
+ self.command_channel_reply.writer.close()
+ self.event_writer.writer.close()
+ del self.event_writer
+ self.controllersock.close()
+ self.controllersock = False
+ self.haveui = False
+ self.lastui = time.time()
+ self.cooker.clientComplete()
+ if self.timeout is None:
+ print("No timeout, exiting.")
+ self.quit = True
+
while not self.quit:
- try:
- if self.command_channel.poll():
- command = self.command_channel.recv()
- self.runCommand(command)
- if self.quitout.poll():
- self.quitout.recv()
+ if self.sock in ready:
+ self.controllersock, address = self.sock.accept()
+ if self.haveui:
+ print("Dropping connection attempt as we have a UI %s" % (str(ready)))
+ self.controllersock.close()
+ else:
+ print("Accepting %s" % (str(ready)))
+ fds.append(self.controllersock)
+ if self.controllersock in ready:
+ try:
+ print("Connecting Client")
+ ui_fds = recvfds(self.controllersock, 3)
+
+ # Where to write events to
+ writer = ConnectionWriter(ui_fds[0])
+ self.event_handle = bb.event.register_UIHhandler(writer, True)
+ self.event_writer = writer
+
+ # Where to read commands from
+ reader = ConnectionReader(ui_fds[1])
+ fds.append(reader)
+ self.command_channel = reader
+
+ # Where to send command return values to
+ writer = ConnectionWriter(ui_fds[2])
+ self.command_channel_reply = writer
+
+ self.haveui = True
+
+ except (EOFError, OSError):
+ disconnect_client(self, fds)
+
+ if not self.timeout == -1.0 and not self.haveui and self.lastui and self.timeout and \
+ (self.lastui + self.timeout) < time.time():
+ print("Server timeout, exiting.")
+ self.quit = True
+
+ if self.command_channel in ready:
+ try:
+ command = self.command_channel.get()
+ except EOFError:
+ # Client connection shutting down
+ ready = []
+ disconnect_client(self, fds)
+ continue
+ if command[0] == "terminateServer":
self.quit = True
+ continue
+ try:
+ print("Running command %s" % command)
+ self.command_channel_reply.send(self.cooker.command.runCommand(command))
+ except Exception as e:
+ logger.exception('Exception in server main event loop running command %s (%s)' % (command, str(e)))
+
+ if self.xmlrpc in ready:
+ self.xmlrpc.handle_requests()
+
+ ready = self.idle_commands(.1, fds)
+
+ print("Exiting")
+ # Remove the socket file so we don't get any more connections, to avoid races
+ os.unlink(self.sockname)
+ self.sock.close()
+
+ try:
+ self.cooker.shutdown(True)
+ except:
+ pass
+
+ self.cooker.post_serve()
+
+ # Finally release the lockfile but warn about other processes holding it open
+ lock = self.bitbake_lock
+ lockfile = lock.name
+ lock.close()
+ lock = None
+
+ while not lock:
+ with bb.utils.timeout(3):
+ lock = bb.utils.lockfile(lockfile, shared=False, retry=False, block=True)
+ if not lock:
+ # Some systems may not have lsof available
+ procs = None
try:
- self.runCommand(["stateForceShutdown"])
- except:
- pass
+ procs = subprocess.check_output(["lsof", '-w', lockfile], stderr=subprocess.STDOUT)
+ except OSError as e:
+ if e.errno != errno.ENOENT:
+ raise
+ if procs is None:
+ # Fall back to fuser if lsof is unavailable
+ try:
+ procs = subprocess.check_output(["fuser", '-v', lockfile], stderr=subprocess.STDOUT)
+ except OSError as e:
+ if e.errno != errno.ENOENT:
+ raise
- self.idle_commands(.1, [self.command_channel, self.quitout])
- except Exception:
- logger.exception('Running command %s', command)
-
- self.event_queue.close()
- bb.event.unregister_UIHhandler(self.event_handle.value)
- self.command_channel.close()
- self.cooker.shutdown(True)
- self.quitout.close()
+ msg = "Delaying shutdown due to active processes which appear to be holding bitbake.lock"
+ if procs:
+ msg += ":\n%s" % str(procs)
+ print(msg)
+ return
+ # We hold the lock so we can remove the file (hide stale pid data)
+ bb.utils.remove(lockfile)
+ bb.utils.unlockfile(lock)
def idle_commands(self, delay, fds=None):
nextsleep = delay
@@ -186,109 +306,317 @@
nextsleep = self.next_heartbeat - now
if nextsleep is not None:
- select.select(fds,[],[],nextsleep)
+ if self.xmlrpc:
+ nextsleep = self.xmlrpc.get_timeout(nextsleep)
+ try:
+ return select.select(fds,[],[],nextsleep)[0]
+ except InterruptedError:
+ # Ignore EINTR
+ return []
+ else:
+ return select.select(fds,[],[],0)[0]
+
+
+class ServerCommunicator():
+ def __init__(self, connection, recv):
+ self.connection = connection
+ self.recv = recv
def runCommand(self, command):
- """
- Run a cooker command on the server
- """
- self.command_channel.send(self.cooker.command.runCommand(command))
+ self.connection.send(command)
+ if not self.recv.poll(30):
+ raise ProcessTimeout("Timeout while waiting for a reply from the bitbake server")
+ return self.recv.get()
- def stop(self):
- self.quitin.send("quit")
- self.quitin.close()
-
-class BitBakeProcessServerConnection(BitBakeBaseServerConnection):
- def __init__(self, serverImpl, ui_channel, event_queue):
- self.procserver = serverImpl
- self.ui_channel = ui_channel
- self.event_queue = event_queue
- self.connection = ServerCommunicator(self.ui_channel, self.procserver.event_handle, self.procserver)
- self.events = self.event_queue
- self.terminated = False
-
- def sigterm_terminate(self):
- bb.error("UI received SIGTERM")
- self.terminate()
-
- def terminate(self):
- if self.terminated:
- return
- self.terminated = True
- def flushevents():
- while True:
- try:
- event = self.event_queue.get(block=False)
- except (Empty, IOError):
- break
- if isinstance(event, logging.LogRecord):
- logger.handle(event)
-
- self.procserver.stop()
-
- while self.procserver.is_alive():
- flushevents()
- self.procserver.join(0.1)
-
- self.ui_channel.close()
- self.event_queue.close()
- self.event_queue.setexit()
- # XXX: Call explicity close in _writer to avoid
- # fd leakage because isn't called on Queue.close()
- self.event_queue._writer.close()
-
-# Wrap Queue to provide API which isn't server implementation specific
-class ProcessEventQueue(multiprocessing.queues.Queue):
- def __init__(self, maxsize):
- multiprocessing.queues.Queue.__init__(self, maxsize, ctx=multiprocessing.get_context())
- self.exit = False
- bb.utils.set_process_name("ProcessEQueue")
-
- def setexit(self):
- self.exit = True
-
- def waitEvent(self, timeout):
- if self.exit:
- return self.getEvent()
- try:
- if not self.server.is_alive():
- return self.getEvent()
- return self.get(True, timeout)
- except Empty:
- return None
-
- def getEvent(self):
- try:
- if not self.server.is_alive():
- self.setexit()
- return self.get(False)
- except Empty:
- if self.exit:
- sys.exit(1)
- return None
-
-class BitBakeServer(BitBakeBaseServer):
- def initServer(self, single_use=True):
- # establish communication channels. We use bidirectional pipes for
- # ui <--> server command/response pairs
- # and a queue for server -> ui event notifications
- #
- self.ui_channel, self.server_channel = Pipe()
- self.event_queue = ProcessEventQueue(0)
- self.serverImpl = ProcessServer(self.server_channel, self.event_queue, None)
- self.event_queue.server = self.serverImpl
-
- def detach(self):
- self.serverImpl.start()
- return
-
- def establishConnection(self, featureset):
-
- self.connection = BitBakeProcessServerConnection(self.serverImpl, self.ui_channel, self.event_queue)
-
- _, error = self.connection.connection.runCommand(["setFeatures", featureset])
+ def updateFeatureSet(self, featureset):
+ _, error = self.runCommand(["setFeatures", featureset])
if error:
logger.error("Unable to set the cooker to the correct featureset: %s" % error)
raise BaseException(error)
- signal.signal(signal.SIGTERM, lambda i, s: self.connection.sigterm_terminate())
- return self.connection
+
+ def getEventHandle(self):
+ handle, error = self.runCommand(["getUIHandlerNum"])
+ if error:
+ logger.error("Unable to get UI Handler Number: %s" % error)
+ raise BaseException(error)
+
+ return handle
+
+ def terminateServer(self):
+ self.connection.send(['terminateServer'])
+ return
+
+class BitBakeProcessServerConnection(object):
+ def __init__(self, ui_channel, recv, eq, sock):
+ self.connection = ServerCommunicator(ui_channel, recv)
+ self.events = eq
+ # Save sock so it doesn't get gc'd for the life of our connection
+ self.socket_connection = sock
+
+ def terminate(self):
+ self.socket_connection.close()
+ self.connection.connection.close()
+ self.connection.recv.close()
+ return
+
+class BitBakeServer(object):
+ start_log_format = '--- Starting bitbake server pid %s at %s ---'
+ start_log_datetime_format = '%Y-%m-%d %H:%M:%S.%f'
+
+ def __init__(self, lock, sockname, configuration, featureset):
+
+ self.configuration = configuration
+ self.featureset = featureset
+ self.sockname = sockname
+ self.bitbake_lock = lock
+ self.readypipe, self.readypipein = os.pipe()
+
+ # Create server control socket
+ if os.path.exists(sockname):
+ os.unlink(sockname)
+
+ self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
+        # AF_UNIX has path length issues so chdir here as a workaround
+ cwd = os.getcwd()
+ logfile = os.path.join(cwd, "bitbake-cookerdaemon.log")
+
+ try:
+ os.chdir(os.path.dirname(sockname))
+ self.sock.bind(os.path.basename(sockname))
+ finally:
+ os.chdir(cwd)
+ self.sock.listen(1)
+
+ os.set_inheritable(self.sock.fileno(), True)
+ startdatetime = datetime.datetime.now()
+ bb.daemonize.createDaemon(self._startServer, logfile)
+ self.sock.close()
+ self.bitbake_lock.close()
+
+ ready = ConnectionReader(self.readypipe)
+ r = ready.poll(30)
+ if r:
+ r = ready.get()
+ if not r or r != "ready":
+ ready.close()
+ bb.error("Unable to start bitbake server")
+ if os.path.exists(logfile):
+ logstart_re = re.compile(self.start_log_format % ('([0-9]+)', '([0-9-]+ [0-9:.]+)'))
+ started = False
+ lines = []
+ with open(logfile, "r") as f:
+ for line in f:
+ if started:
+ lines.append(line)
+ else:
+ res = logstart_re.match(line.rstrip())
+ if res:
+ ldatetime = datetime.datetime.strptime(res.group(2), self.start_log_datetime_format)
+ if ldatetime >= startdatetime:
+ started = True
+ lines.append(line)
+ if lines:
+ if len(lines) > 10:
+ bb.error("Last 10 lines of server log for this session (%s):\n%s" % (logfile, "".join(lines[-10:])))
+ else:
+ bb.error("Server log for this session (%s):\n%s" % (logfile, "".join(lines)))
+ raise SystemExit(1)
+ ready.close()
+ os.close(self.readypipein)
+
+ def _startServer(self):
+ print(self.start_log_format % (os.getpid(), datetime.datetime.now().strftime(self.start_log_datetime_format)))
+ server = ProcessServer(self.bitbake_lock, self.sock, self.sockname)
+ self.configuration.setServerRegIdleCallback(server.register_idle_function)
+ writer = ConnectionWriter(self.readypipein)
+ try:
+ self.cooker = bb.cooker.BBCooker(self.configuration, self.featureset)
+ writer.send("ready")
+ except:
+ writer.send("fail")
+ raise
+ finally:
+ os.close(self.readypipein)
+ server.cooker = self.cooker
+ server.server_timeout = self.configuration.server_timeout
+ server.xmlrpcinterface = self.configuration.xmlrpcinterface
+ print("Started bitbake server pid %d" % os.getpid())
+ server.start()
+
+def connectProcessServer(sockname, featureset):
+ # Connect to socket
+ sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
+    # AF_UNIX has path length issues so chdir here as a workaround
+ cwd = os.getcwd()
+
+ try:
+ os.chdir(os.path.dirname(sockname))
+ sock.connect(os.path.basename(sockname))
+ finally:
+ os.chdir(cwd)
+
+ readfd = writefd = readfd1 = writefd1 = readfd2 = writefd2 = None
+ eq = command_chan_recv = command_chan = None
+
+ try:
+
+ # Send an fd for the remote to write events to
+ readfd, writefd = os.pipe()
+ eq = BBUIEventQueue(readfd)
+        # Send an fd for the remote to receive commands from
+ readfd1, writefd1 = os.pipe()
+ command_chan = ConnectionWriter(writefd1)
+ # Send an fd for the remote to write commands results to
+ readfd2, writefd2 = os.pipe()
+ command_chan_recv = ConnectionReader(readfd2)
+
+ sendfds(sock, [writefd, readfd1, writefd2])
+
+ server_connection = BitBakeProcessServerConnection(command_chan, command_chan_recv, eq, sock)
+
+ # Close the ends of the pipes we won't use
+ for i in [writefd, readfd1, writefd2]:
+ os.close(i)
+
+ server_connection.connection.updateFeatureSet(featureset)
+
+ except (Exception, SystemExit) as e:
+ if command_chan_recv:
+ command_chan_recv.close()
+ if command_chan:
+ command_chan.close()
+ for i in [writefd, readfd1, writefd2]:
+ try:
+ os.close(i)
+ except OSError:
+ pass
+ sock.close()
+ raise
+
+ return server_connection
+
+def sendfds(sock, fds):
+ '''Send an array of fds over an AF_UNIX socket.'''
+ fds = array.array('i', fds)
+ msg = bytes([len(fds) % 256])
+ sock.sendmsg([msg], [(socket.SOL_SOCKET, socket.SCM_RIGHTS, fds)])
+
+def recvfds(sock, size):
+ '''Receive an array of fds over an AF_UNIX socket.'''
+ a = array.array('i')
+ bytes_size = a.itemsize * size
+ msg, ancdata, flags, addr = sock.recvmsg(1, socket.CMSG_LEN(bytes_size))
+ if not msg and not ancdata:
+ raise EOFError
+ try:
+ if len(ancdata) != 1:
+ raise RuntimeError('received %d items of ancdata' %
+ len(ancdata))
+ cmsg_level, cmsg_type, cmsg_data = ancdata[0]
+ if (cmsg_level == socket.SOL_SOCKET and
+ cmsg_type == socket.SCM_RIGHTS):
+ if len(cmsg_data) % a.itemsize != 0:
+ raise ValueError
+ a.frombytes(cmsg_data)
+ assert len(a) % 256 == msg[0]
+ return list(a)
+ except (ValueError, IndexError):
+ pass
+ raise RuntimeError('Invalid data received')
+
+class BBUIEventQueue:
+ def __init__(self, readfd):
+
+ self.eventQueue = []
+ self.eventQueueLock = threading.Lock()
+ self.eventQueueNotify = threading.Event()
+
+ self.reader = ConnectionReader(readfd)
+
+ self.t = threading.Thread()
+ self.t.setDaemon(True)
+ self.t.run = self.startCallbackHandler
+ self.t.start()
+
+ def getEvent(self):
+ self.eventQueueLock.acquire()
+
+ if len(self.eventQueue) == 0:
+ self.eventQueueLock.release()
+ return None
+
+ item = self.eventQueue.pop(0)
+
+ if len(self.eventQueue) == 0:
+ self.eventQueueNotify.clear()
+
+ self.eventQueueLock.release()
+ return item
+
+ def waitEvent(self, delay):
+ self.eventQueueNotify.wait(delay)
+ return self.getEvent()
+
+ def queue_event(self, event):
+ self.eventQueueLock.acquire()
+ self.eventQueue.append(event)
+ self.eventQueueNotify.set()
+ self.eventQueueLock.release()
+
+ def send_event(self, event):
+ self.queue_event(pickle.loads(event))
+
+ def startCallbackHandler(self):
+ bb.utils.set_process_name("UIEventQueue")
+ while True:
+ try:
+ self.reader.wait()
+ event = self.reader.get()
+ self.queue_event(event)
+ except EOFError:
+ # Easiest way to exit is to close the file descriptor to cause an exit
+ break
+ self.reader.close()
+
+class ConnectionReader(object):
+
+ def __init__(self, fd):
+ self.reader = multiprocessing.connection.Connection(fd, writable=False)
+ self.rlock = multiprocessing.Lock()
+
+ def wait(self, timeout=None):
+ return multiprocessing.connection.wait([self.reader], timeout)
+
+ def poll(self, timeout=None):
+ return self.reader.poll(timeout)
+
+ def get(self):
+ with self.rlock:
+ res = self.reader.recv_bytes()
+ return multiprocessing.reduction.ForkingPickler.loads(res)
+
+ def fileno(self):
+ return self.reader.fileno()
+
+ def close(self):
+ return self.reader.close()
+
+
+class ConnectionWriter(object):
+
+ def __init__(self, fd):
+ self.writer = multiprocessing.connection.Connection(fd, readable=False)
+ self.wlock = multiprocessing.Lock()
+ # Why bb.event needs this I have no idea
+ self.event = self
+
+ def send(self, obj):
+ obj = multiprocessing.reduction.ForkingPickler.dumps(obj)
+ with self.wlock:
+ self.writer.send_bytes(obj)
+
+ def fileno(self):
+ return self.writer.fileno()
+
+ def close(self):
+ return self.writer.close()
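
The process server above hands the connecting UI three pipe file descriptors over the UNIX-domain control socket: one the server writes events to, one it reads commands from, and one it writes command replies to. The transfer uses SCM_RIGHTS ancillary data, implemented by sendfds() in connectProcessServer() and recvfds() in ProcessServer. A minimal standalone sketch of that mechanism, using a socketpair and a single hypothetical pipe instead of BitBake's control socket:

    import array, os, socket

    parent, child = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
    rfd, wfd = os.pipe()                      # stand-in for the event pipe

    # Sender: one payload byte carries the fd count, the fds themselves
    # travel as SCM_RIGHTS ancillary data, as sendfds() does.
    fds = array.array('i', [wfd])
    parent.sendmsg([bytes([len(fds)])],
                   [(socket.SOL_SOCKET, socket.SCM_RIGHTS, fds)])

    # Receiver: the kernel installs duplicates of the fds in the receiving
    # process, which recvfds() then unpacks into a list.
    a = array.array('i')
    msg, ancdata, flags, addr = child.recvmsg(1, socket.CMSG_LEN(a.itemsize))
    a.frombytes(ancdata[0][2])
    os.write(a[0], b"hello")
    print(os.read(rfd, 5))                    # prints b'hello'
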
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/server/xmlrpc.py b/import-layers/yocto-poky/bitbake/lib/bb/server/xmlrpc.py
deleted file mode 100644
index a06007f..0000000
--- a/import-layers/yocto-poky/bitbake/lib/bb/server/xmlrpc.py
+++ /dev/null
@@ -1,422 +0,0 @@
-#
-# BitBake XMLRPC Server
-#
-# Copyright (C) 2006 - 2007 Michael 'Mickey' Lauer
-# Copyright (C) 2006 - 2008 Richard Purdie
-#
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License version 2 as
-# published by the Free Software Foundation.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License along
-# with this program; if not, write to the Free Software Foundation, Inc.,
-# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
-
-"""
- This module implements an xmlrpc server for BitBake.
-
- Use this by deriving a class from BitBakeXMLRPCServer and then adding
- methods which you want to "export" via XMLRPC. If the methods have the
- prefix xmlrpc_, then registering those function will happen automatically,
- if not, you need to call register_function.
-
- Use register_idle_function() to add a function which the xmlrpc server
- calls from within server_forever when no requests are pending. Make sure
- that those functions are non-blocking or else you will introduce latency
- in the server's main loop.
-"""
-
-import os
-import sys
-
-import hashlib
-import time
-import socket
-import signal
-import threading
-import pickle
-import inspect
-import select
-import http.client
-import xmlrpc.client
-from xmlrpc.server import SimpleXMLRPCServer, SimpleXMLRPCRequestHandler
-
-import bb
-from bb import daemonize
-from bb.ui import uievent
-from . import BitBakeBaseServer, BitBakeBaseServerConnection, BaseImplServer
-
-DEBUG = False
-
-class BBTransport(xmlrpc.client.Transport):
- def __init__(self, timeout):
- self.timeout = timeout
- self.connection_token = None
- xmlrpc.client.Transport.__init__(self)
-
- # Modified from default to pass timeout to HTTPConnection
- def make_connection(self, host):
- #return an existing connection if possible. This allows
- #HTTP/1.1 keep-alive.
- if self._connection and host == self._connection[0]:
- return self._connection[1]
-
- # create a HTTP connection object from a host descriptor
- chost, self._extra_headers, x509 = self.get_host_info(host)
- #store the host argument along with the connection object
- self._connection = host, http.client.HTTPConnection(chost, timeout=self.timeout)
- return self._connection[1]
-
- def set_connection_token(self, token):
- self.connection_token = token
-
- def send_content(self, h, body):
- if self.connection_token:
- h.putheader("Bitbake-token", self.connection_token)
- xmlrpc.client.Transport.send_content(self, h, body)
-
-def _create_server(host, port, timeout = 60):
- t = BBTransport(timeout)
- s = xmlrpc.client.ServerProxy("http://%s:%d/" % (host, port), transport=t, allow_none=True, use_builtin_types=True)
- return s, t
-
-def check_connection(remote, timeout):
- try:
- host, port = remote.split(":")
- port = int(port)
- except Exception as e:
- bb.warn("Failed to read remote definition (%s)" % str(e))
- raise e
-
- server, _transport = _create_server(host, port, timeout)
- try:
- ret, err = server.runCommand(['getVariable', 'TOPDIR'])
- if err or not ret:
- return False
- except ConnectionError:
- return False
- return True
-
-class BitBakeServerCommands():
-
- def __init__(self, server):
- self.server = server
- self.has_client = False
-
- def registerEventHandler(self, host, port):
- """
- Register a remote UI Event Handler
- """
- s, t = _create_server(host, port)
-
- # we don't allow connections if the cooker is running
- if (self.cooker.state in [bb.cooker.state.parsing, bb.cooker.state.running]):
- return None, "Cooker is busy: %s" % bb.cooker.state.get_name(self.cooker.state)
-
- self.event_handle = bb.event.register_UIHhandler(s, True)
- return self.event_handle, 'OK'
-
- def unregisterEventHandler(self, handlerNum):
- """
- Unregister a remote UI Event Handler
- """
- return bb.event.unregister_UIHhandler(handlerNum)
-
- def runCommand(self, command):
- """
- Run a cooker command on the server
- """
- return self.cooker.command.runCommand(command, self.server.readonly)
-
- def getEventHandle(self):
- return self.event_handle
-
- def terminateServer(self):
- """
- Trigger the server to quit
- """
- self.server.quit = True
- print("Server (cooker) exiting")
- return
-
- def addClient(self):
- if self.has_client:
- return None
- token = hashlib.md5(str(time.time()).encode("utf-8")).hexdigest()
- self.server.set_connection_token(token)
- self.has_client = True
- return token
-
- def removeClient(self):
- if self.has_client:
- self.server.set_connection_token(None)
- self.has_client = False
- if self.server.single_use:
- self.server.quit = True
-
-# This request handler checks if the request has a "Bitbake-token" header
-# field (this comes from the client side) and compares it with its internal
-# "Bitbake-token" field (this comes from the server). If the two are not
-# equal, it is assumed that a client is trying to connect to the server
-# while another client is connected to the server. In this case, a 503 error
-# ("service unavailable") is returned to the client.
-class BitBakeXMLRPCRequestHandler(SimpleXMLRPCRequestHandler):
- def __init__(self, request, client_address, server):
- self.server = server
- SimpleXMLRPCRequestHandler.__init__(self, request, client_address, server)
-
- def do_POST(self):
- try:
- remote_token = self.headers["Bitbake-token"]
- except:
- remote_token = None
- if remote_token != self.server.connection_token and remote_token != "observer":
- self.report_503()
- else:
- if remote_token == "observer":
- self.server.readonly = True
- else:
- self.server.readonly = False
- SimpleXMLRPCRequestHandler.do_POST(self)
-
- def report_503(self):
- self.send_response(503)
- response = 'No more client allowed'
- self.send_header("Content-type", "text/plain")
- self.send_header("Content-length", str(len(response)))
- self.end_headers()
- self.wfile.write(bytes(response, 'utf-8'))
-
-
-class XMLRPCProxyServer(BaseImplServer):
- """ not a real working server, but a stub for a proxy server connection
-
- """
- def __init__(self, host, port, use_builtin_types=True):
- self.host = host
- self.port = port
-
-class XMLRPCServer(SimpleXMLRPCServer, BaseImplServer):
- # remove this when you're done with debugging
- # allow_reuse_address = True
-
- def __init__(self, interface, single_use=False, idle_timeout=0):
- """
- Constructor
- """
- BaseImplServer.__init__(self)
- self.single_use = single_use
- # Use auto port configuration
- if (interface[1] == -1):
- interface = (interface[0], 0)
- SimpleXMLRPCServer.__init__(self, interface,
- requestHandler=BitBakeXMLRPCRequestHandler,
- logRequests=False, allow_none=True)
- self.host, self.port = self.socket.getsockname()
- self.connection_token = None
- #self.register_introspection_functions()
- self.commands = BitBakeServerCommands(self)
- self.autoregister_all_functions(self.commands, "")
- self.interface = interface
- self.time = time.time()
- self.idle_timeout = idle_timeout
- if idle_timeout:
- self.register_idle_function(self.handle_idle_timeout, self)
-
- def addcooker(self, cooker):
- BaseImplServer.addcooker(self, cooker)
- self.commands.cooker = cooker
-
- def autoregister_all_functions(self, context, prefix):
- """
- Convenience method for registering all functions in the scope
- of this class that start with a common prefix
- """
- methodlist = inspect.getmembers(context, inspect.ismethod)
- for name, method in methodlist:
- if name.startswith(prefix):
- self.register_function(method, name[len(prefix):])
-
- def handle_idle_timeout(self, server, data, abort):
- if not abort:
- if time.time() - server.time > server.idle_timeout:
- server.quit = True
- print("Server idle timeout expired")
- return []
-
- def serve_forever(self):
- # Start the actual XMLRPC server
- bb.cooker.server_main(self.cooker, self._serve_forever)
-
- def _serve_forever(self):
- """
- Serve Requests. Overloaded to honor a quit command
- """
- self.quit = False
- while not self.quit:
- fds = [self]
- nextsleep = 0.1
- for function, data in list(self._idlefuns.items()):
- retval = None
- try:
- retval = function(self, data, False)
- if retval is False:
- del self._idlefuns[function]
- elif retval is True:
- nextsleep = 0
- elif isinstance(retval, float):
- if (retval < nextsleep):
- nextsleep = retval
- else:
- fds = fds + retval
- except SystemExit:
- raise
- except:
- import traceback
- traceback.print_exc()
- if retval == None:
- # the function execute failed; delete it
- del self._idlefuns[function]
- pass
-
- socktimeout = self.socket.gettimeout() or nextsleep
- socktimeout = min(socktimeout, nextsleep)
- # Mirror what BaseServer handle_request would do
- try:
- fd_sets = select.select(fds, [], [], socktimeout)
- if fd_sets[0] and self in fd_sets[0]:
- if self.idle_timeout:
- self.time = time.time()
- self._handle_request_noblock()
- except IOError:
- # we ignore interrupted calls
- pass
-
- # Tell idle functions we're exiting
- for function, data in list(self._idlefuns.items()):
- try:
- retval = function(self, data, True)
- except:
- pass
- self.server_close()
- return
-
- def set_connection_token(self, token):
- self.connection_token = token
-
-class BitBakeXMLRPCServerConnection(BitBakeBaseServerConnection):
- def __init__(self, serverImpl, clientinfo=("localhost", 0), observer_only = False, featureset = None):
- self.connection, self.transport = _create_server(serverImpl.host, serverImpl.port)
- self.clientinfo = clientinfo
- self.serverImpl = serverImpl
- self.observer_only = observer_only
- if featureset:
- self.featureset = featureset
- else:
- self.featureset = []
-
- def connect(self, token = None):
- if token is None:
- if self.observer_only:
- token = "observer"
- else:
- token = self.connection.addClient()
-
- if token is None:
- return None
-
- self.transport.set_connection_token(token)
- return self
-
- def setupEventQueue(self):
- self.events = uievent.BBUIEventQueue(self.connection, self.clientinfo)
- for event in bb.event.ui_queue:
- self.events.queue_event(event)
-
- _, error = self.connection.runCommand(["setFeatures", self.featureset])
- if error:
- # disconnect the client, we can't make the setFeature work
- self.connection.removeClient()
- # no need to log it here, the error shall be sent to the client
- raise BaseException(error)
-
- def removeClient(self):
- if not self.observer_only:
- self.connection.removeClient()
-
- def terminate(self):
- # Don't wait for server indefinitely
- import socket
- socket.setdefaulttimeout(2)
- try:
- self.events.system_quit()
- except:
- pass
- try:
- self.connection.removeClient()
- except:
- pass
-
-class BitBakeServer(BitBakeBaseServer):
- def initServer(self, interface = ("localhost", 0),
- single_use = False, idle_timeout=0):
- self.interface = interface
- self.serverImpl = XMLRPCServer(interface, single_use, idle_timeout)
-
- def detach(self):
- daemonize.createDaemon(self.serverImpl.serve_forever, "bitbake-cookerdaemon.log")
- del self.cooker
-
- def establishConnection(self, featureset):
- self.connection = BitBakeXMLRPCServerConnection(self.serverImpl, self.interface, False, featureset)
- return self.connection.connect()
-
- def set_connection_token(self, token):
- self.connection.transport.set_connection_token(token)
-
-class BitBakeXMLRPCClient(BitBakeBaseServer):
-
- def __init__(self, observer_only = False, token = None):
- self.token = token
-
- self.observer_only = observer_only
- # if we need extra caches, just tell the server to load them all
- pass
-
- def saveConnectionDetails(self, remote):
- self.remote = remote
-
- def establishConnection(self, featureset):
- # The format of "remote" must be "server:port"
- try:
- [host, port] = self.remote.split(":")
- port = int(port)
- except Exception as e:
- bb.warn("Failed to read remote definition (%s)" % str(e))
- raise e
-
- # We need our IP for the server connection. We get the IP
- # by trying to connect with the server
- try:
- s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
- s.connect((host, port))
- ip = s.getsockname()[0]
- s.close()
- except Exception as e:
- bb.warn("Could not create socket for %s:%s (%s)" % (host, port, str(e)))
- raise e
- try:
- self.serverImpl = XMLRPCProxyServer(host, port, use_builtin_types=True)
- self.connection = BitBakeXMLRPCServerConnection(self.serverImpl, (ip, 0), self.observer_only, featureset)
- return self.connection.connect(self.token)
- except Exception as e:
- bb.warn("Could not connect to server at %s:%s (%s)" % (host, port, str(e)))
- raise e
-
- def endSession(self):
- self.connection.removeClient()
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/server/xmlrpcclient.py b/import-layers/yocto-poky/bitbake/lib/bb/server/xmlrpcclient.py
new file mode 100644
index 0000000..4661a9e
--- /dev/null
+++ b/import-layers/yocto-poky/bitbake/lib/bb/server/xmlrpcclient.py
@@ -0,0 +1,154 @@
+#
+# BitBake XMLRPC Client Interface
+#
+# Copyright (C) 2006 - 2007 Michael 'Mickey' Lauer
+# Copyright (C) 2006 - 2008 Richard Purdie
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License version 2 as
+# published by the Free Software Foundation.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License along
+# with this program; if not, write to the Free Software Foundation, Inc.,
+# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+
+import os
+import sys
+
+import socket
+import http.client
+import xmlrpc.client
+
+import bb
+from bb.ui import uievent
+
+class BBTransport(xmlrpc.client.Transport):
+ def __init__(self, timeout):
+ self.timeout = timeout
+ self.connection_token = None
+ xmlrpc.client.Transport.__init__(self)
+
+ # Modified from default to pass timeout to HTTPConnection
+ def make_connection(self, host):
+ #return an existing connection if possible. This allows
+ #HTTP/1.1 keep-alive.
+ if self._connection and host == self._connection[0]:
+ return self._connection[1]
+
+ # create a HTTP connection object from a host descriptor
+ chost, self._extra_headers, x509 = self.get_host_info(host)
+ #store the host argument along with the connection object
+ self._connection = host, http.client.HTTPConnection(chost, timeout=self.timeout)
+ return self._connection[1]
+
+ def set_connection_token(self, token):
+ self.connection_token = token
+
+ def send_content(self, h, body):
+ if self.connection_token:
+ h.putheader("Bitbake-token", self.connection_token)
+ xmlrpc.client.Transport.send_content(self, h, body)
+
+def _create_server(host, port, timeout = 60):
+ t = BBTransport(timeout)
+ s = xmlrpc.client.ServerProxy("http://%s:%d/" % (host, port), transport=t, allow_none=True, use_builtin_types=True)
+ return s, t
+
+def check_connection(remote, timeout):
+ try:
+ host, port = remote.split(":")
+ port = int(port)
+ except Exception as e:
+ bb.warn("Failed to read remote definition (%s)" % str(e))
+ raise e
+
+ server, _transport = _create_server(host, port, timeout)
+ try:
+ ret, err = server.runCommand(['getVariable', 'TOPDIR'])
+ if err or not ret:
+ return False
+ except ConnectionError:
+ return False
+ return True
+
+class BitBakeXMLRPCServerConnection(object):
+ def __init__(self, host, port, clientinfo=("localhost", 0), observer_only = False, featureset = None):
+ self.connection, self.transport = _create_server(host, port)
+ self.clientinfo = clientinfo
+ self.observer_only = observer_only
+ if featureset:
+ self.featureset = featureset
+ else:
+ self.featureset = []
+
+ self.events = uievent.BBUIEventQueue(self.connection, self.clientinfo)
+
+ _, error = self.connection.runCommand(["setFeatures", self.featureset])
+ if error:
+ # disconnect the client, we can't make the setFeature work
+ self.connection.removeClient()
+ # no need to log it here, the error shall be sent to the client
+ raise BaseException(error)
+
+ def connect(self, token = None):
+ if token is None:
+ if self.observer_only:
+ token = "observer"
+ else:
+ token = self.connection.addClient()
+
+ if token is None:
+ return None
+
+ self.transport.set_connection_token(token)
+ return self
+
+ def removeClient(self):
+ if not self.observer_only:
+ self.connection.removeClient()
+
+ def terminate(self):
+ # Don't wait for server indefinitely
+ socket.setdefaulttimeout(2)
+ try:
+ self.events.system_quit()
+ except:
+ pass
+ try:
+ self.connection.removeClient()
+ except:
+ pass
+
+def connectXMLRPC(remote, featureset, observer_only = False, token = None):
+ # The format of "remote" must be "server:port"
+ try:
+ [host, port] = remote.split(":")
+ port = int(port)
+ except Exception as e:
+ bb.warn("Failed to parse remote definition %s (%s)" % (remote, str(e)))
+ raise e
+
+ # We need our IP for the server connection. We get the IP
+ # by trying to connect with the server
+ try:
+ s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
+ s.connect((host, port))
+ ip = s.getsockname()[0]
+ s.close()
+ except Exception as e:
+ bb.warn("Could not create socket for %s:%s (%s)" % (host, port, str(e)))
+ raise e
+ try:
+ connection = BitBakeXMLRPCServerConnection(host, port, (ip, 0), observer_only, featureset)
+ return connection.connect(token)
+ except Exception as e:
+ bb.warn("Could not connect to server at %s:%s (%s)" % (host, port, str(e)))
+ raise e
+
+
+
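
On the client side the per-connection token lives in a custom xmlrpc.client.Transport: BBTransport.send_content() above stamps every request with a Bitbake-token header, which the server's request handler uses to distinguish the controlling UI from read-only observers. A minimal sketch of the same pattern, with a hypothetical endpoint and token value:

    import xmlrpc.client

    class TokenTransport(xmlrpc.client.Transport):
        def __init__(self, token):
            super().__init__()
            self.token = token

        def send_content(self, h, body):
            # attach the identifying header before the default send path
            h.putheader("Bitbake-token", self.token)
            super().send_content(h, body)

    proxy = xmlrpc.client.ServerProxy("http://127.0.0.1:8888/",
                                      transport=TokenTransport("observer"),
                                      allow_none=True)
    # any call made through proxy now carries the Bitbake-token header
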
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/server/xmlrpcserver.py b/import-layers/yocto-poky/bitbake/lib/bb/server/xmlrpcserver.py
new file mode 100644
index 0000000..875b128
--- /dev/null
+++ b/import-layers/yocto-poky/bitbake/lib/bb/server/xmlrpcserver.py
@@ -0,0 +1,158 @@
+#
+# BitBake XMLRPC Server Interface
+#
+# Copyright (C) 2006 - 2007 Michael 'Mickey' Lauer
+# Copyright (C) 2006 - 2008 Richard Purdie
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License version 2 as
+# published by the Free Software Foundation.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License along
+# with this program; if not, write to the Free Software Foundation, Inc.,
+# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+
+import os
+import sys
+
+import hashlib
+import time
+import inspect
+from xmlrpc.server import SimpleXMLRPCServer, SimpleXMLRPCRequestHandler
+
+import bb
+
+# This request handler checks if the request has a "Bitbake-token" header
+# field (this comes from the client side) and compares it with its internal
+# "Bitbake-token" field (this comes from the server). If the two are not
+# equal, it is assumed that a client is trying to connect to the server
+# while another client is connected to the server. In this case, a 503 error
+# ("service unavailable") is returned to the client.
+class BitBakeXMLRPCRequestHandler(SimpleXMLRPCRequestHandler):
+ def __init__(self, request, client_address, server):
+ self.server = server
+ SimpleXMLRPCRequestHandler.__init__(self, request, client_address, server)
+
+ def do_POST(self):
+ try:
+ remote_token = self.headers["Bitbake-token"]
+ except:
+ remote_token = None
+ if 0 and remote_token != self.server.connection_token and remote_token != "observer":
+ self.report_503()
+ else:
+ if remote_token == "observer":
+ self.server.readonly = True
+ else:
+ self.server.readonly = False
+ SimpleXMLRPCRequestHandler.do_POST(self)
+
+ def report_503(self):
+ self.send_response(503)
+ response = 'No more client allowed'
+ self.send_header("Content-type", "text/plain")
+ self.send_header("Content-length", str(len(response)))
+ self.end_headers()
+ self.wfile.write(bytes(response, 'utf-8'))
+
+class BitBakeXMLRPCServer(SimpleXMLRPCServer):
+ # remove this when you're done with debugging
+ # allow_reuse_address = True
+
+ def __init__(self, interface, cooker, parent):
+ # Use auto port configuration
+ if (interface[1] == -1):
+ interface = (interface[0], 0)
+ SimpleXMLRPCServer.__init__(self, interface,
+ requestHandler=BitBakeXMLRPCRequestHandler,
+ logRequests=False, allow_none=True)
+ self.host, self.port = self.socket.getsockname()
+ self.interface = interface
+
+ self.connection_token = None
+ self.commands = BitBakeXMLRPCServerCommands(self)
+ self.register_functions(self.commands, "")
+
+ self.cooker = cooker
+ self.parent = parent
+
+
+ def register_functions(self, context, prefix):
+ """
+ Convenience method for registering all functions in the scope
+ of this class that start with a common prefix
+ """
+ methodlist = inspect.getmembers(context, inspect.ismethod)
+ for name, method in methodlist:
+ if name.startswith(prefix):
+ self.register_function(method, name[len(prefix):])
+
+ def get_timeout(self, delay):
+ socktimeout = self.socket.gettimeout() or delay
+ return min(socktimeout, delay)
+
+ def handle_requests(self):
+ self._handle_request_noblock()
+
+class BitBakeXMLRPCServerCommands():
+
+ def __init__(self, server):
+ self.server = server
+ self.has_client = False
+
+ def registerEventHandler(self, host, port):
+ """
+ Register a remote UI Event Handler
+ """
+ s, t = bb.server.xmlrpcclient._create_server(host, port)
+
+ # we don't allow connections if the cooker is running
+ if (self.server.cooker.state in [bb.cooker.state.parsing, bb.cooker.state.running]):
+ return None, "Cooker is busy: %s" % bb.cooker.state.get_name(self.server.cooker.state)
+
+ self.event_handle = bb.event.register_UIHhandler(s, True)
+ return self.event_handle, 'OK'
+
+ def unregisterEventHandler(self, handlerNum):
+ """
+ Unregister a remote UI Event Handler
+ """
+ ret = bb.event.unregister_UIHhandler(handlerNum, True)
+ self.event_handle = None
+ return ret
+
+ def runCommand(self, command):
+ """
+ Run a cooker command on the server
+ """
+ return self.server.cooker.command.runCommand(command, self.server.readonly)
+
+ def getEventHandle(self):
+ return self.event_handle
+
+ def terminateServer(self):
+ """
+ Trigger the server to quit
+ """
+ self.server.parent.quit = True
+ print("XMLRPC Server triggering exit")
+ return
+
+ def addClient(self):
+ if self.server.parent.haveui:
+ return None
+ token = hashlib.md5(str(time.time()).encode("utf-8")).hexdigest()
+ self.server.connection_token = token
+ self.server.parent.haveui = True
+ return token
+
+ def removeClient(self):
+ if self.server.parent.haveui:
+ self.server.connection_token = None
+ self.server.parent.haveui = False
+
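
Because register_functions() exports every method of BitBakeXMLRPCServerCommands, a remote UI drives the server through plain XML-RPC calls: claim the single client slot with addClient(), run cooker commands with runCommand(), and release the slot with removeClient(). A minimal usage sketch against a hypothetical listening address, assuming a server is already up on that port:

    import xmlrpc.client

    server = xmlrpc.client.ServerProxy("http://127.0.0.1:8888/", allow_none=True)
    token = server.addClient()        # None if another UI already holds the slot
    if token is not None:
        try:
            result, error = server.runCommand(["getVariable", "TOPDIR"])
            print(result, error)
        finally:
            server.removeClient()     # free the slot for the next client
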
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/siggen.py b/import-layers/yocto-poky/bitbake/lib/bb/siggen.py
index f71190a..5ef82d7 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/siggen.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/siggen.py
@@ -69,6 +69,10 @@
def set_taskdata(self, data):
self.runtaskdeps, self.taskhash, self.file_checksum_values, self.taints, self.basehash = data
+ def reset(self, data):
+ self.__init__(data)
+
+
class SignatureGeneratorBasic(SignatureGenerator):
"""
"""
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/taskdata.py b/import-layers/yocto-poky/bitbake/lib/bb/taskdata.py
index 8c96a56..0ea6c0b 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/taskdata.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/taskdata.py
@@ -47,7 +47,7 @@
"""
BitBake Task Data implementation
"""
- def __init__(self, abort = True, tryaltconfigs = False, skiplist = None, allowincomplete = False):
+ def __init__(self, abort = True, skiplist = None, allowincomplete = False):
self.build_targets = {}
self.run_targets = {}
@@ -66,7 +66,6 @@
self.failed_fns = []
self.abort = abort
- self.tryaltconfigs = tryaltconfigs
self.allowincomplete = allowincomplete
self.skiplist = skiplist
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/tests/event.py b/import-layers/yocto-poky/bitbake/lib/bb/tests/event.py
new file mode 100644
index 0000000..c7eb1fe
--- /dev/null
+++ b/import-layers/yocto-poky/bitbake/lib/bb/tests/event.py
@@ -0,0 +1,377 @@
+# ex:ts=4:sw=4:sts=4:et
+# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
+#
+# BitBake Tests for the Event implementation (event.py)
+#
+# Copyright (C) 2017 Intel Corporation
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License version 2 as
+# published by the Free Software Foundation.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License along
+# with this program; if not, write to the Free Software Foundation, Inc.,
+# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+#
+
+import unittest
+import bb
+import logging
+import bb.compat
+import bb.event
+import importlib
+import threading
+import time
+import pickle
+from unittest.mock import Mock
+from unittest.mock import call
+
+
+class EventQueueStub():
+ """ Class used as specification for UI event handler queue stub objects """
+ def __init__(self):
+ return
+
+ def send(self, event):
+ return
+
+
+class PickleEventQueueStub():
+ """ Class used as specification for UI event handler queue stub objects
+ with sendpickle method """
+ def __init__(self):
+ return
+
+ def sendpickle(self, pickled_event):
+ return
+
+
+class UIClientStub():
+ """ Class used as specification for UI event handler stub objects """
+ def __init__(self):
+ self.event = None
+
+
+class EventHandlingTest(unittest.TestCase):
+ """ Event handling test class """
+ _threadlock_test_calls = []
+
+ def setUp(self):
+ self._test_process = Mock()
+ ui_client1 = UIClientStub()
+ ui_client2 = UIClientStub()
+ self._test_ui1 = Mock(wraps=ui_client1)
+ self._test_ui2 = Mock(wraps=ui_client2)
+ importlib.reload(bb.event)
+
+ def _create_test_handlers(self):
+ """ Method used to create a test handler ordered dictionary """
+ test_handlers = bb.compat.OrderedDict()
+ test_handlers["handler1"] = self._test_process.handler1
+ test_handlers["handler2"] = self._test_process.handler2
+ return test_handlers
+
+ def test_class_handlers(self):
+ """ Test set_class_handlers and get_class_handlers methods """
+ test_handlers = self._create_test_handlers()
+ bb.event.set_class_handlers(test_handlers)
+ self.assertEqual(test_handlers,
+ bb.event.get_class_handlers())
+
+ def test_handlers(self):
+ """ Test set_handlers and get_handlers """
+ test_handlers = self._create_test_handlers()
+ bb.event.set_handlers(test_handlers)
+ self.assertEqual(test_handlers,
+ bb.event.get_handlers())
+
+ def test_clean_class_handlers(self):
+ """ Test clean_class_handlers method """
+ cleanDict = bb.compat.OrderedDict()
+ self.assertEqual(cleanDict,
+ bb.event.clean_class_handlers())
+
+ def test_register(self):
+ """ Test register method for class handlers """
+ result = bb.event.register("handler", self._test_process.handler)
+ self.assertEqual(result, bb.event.Registered)
+ handlers_dict = bb.event.get_class_handlers()
+ self.assertIn("handler", handlers_dict)
+
+ def test_already_registered(self):
+ """ Test detection of an already registed class handler """
+ bb.event.register("handler", self._test_process.handler)
+ handlers_dict = bb.event.get_class_handlers()
+ self.assertIn("handler", handlers_dict)
+ result = bb.event.register("handler", self._test_process.handler)
+ self.assertEqual(result, bb.event.AlreadyRegistered)
+
+ def test_register_from_string(self):
+ """ Test register method receiving code in string """
+ result = bb.event.register("string_handler", " return True")
+ self.assertEqual(result, bb.event.Registered)
+ handlers_dict = bb.event.get_class_handlers()
+ self.assertIn("string_handler", handlers_dict)
+
+ def test_register_with_mask(self):
+ """ Test register method with event masking """
+ mask = ["bb.event.OperationStarted",
+ "bb.event.OperationCompleted"]
+ result = bb.event.register("event_handler",
+ self._test_process.event_handler,
+ mask)
+ self.assertEqual(result, bb.event.Registered)
+ handlers_dict = bb.event.get_class_handlers()
+ self.assertIn("event_handler", handlers_dict)
+
+ def test_remove(self):
+ """ Test remove method for class handlers """
+ test_handlers = self._create_test_handlers()
+ bb.event.set_class_handlers(test_handlers)
+ count = len(test_handlers)
+ bb.event.remove("handler1", None)
+ test_handlers = bb.event.get_class_handlers()
+ self.assertEqual(len(test_handlers), count - 1)
+ with self.assertRaises(KeyError):
+ bb.event.remove("handler1", None)
+
+ def test_execute_handler(self):
+ """ Test execute_handler method for class handlers """
+ mask = ["bb.event.OperationProgress"]
+ result = bb.event.register("event_handler",
+ self._test_process.event_handler,
+ mask)
+ self.assertEqual(result, bb.event.Registered)
+ event = bb.event.OperationProgress(current=10, total=100)
+ bb.event.execute_handler("event_handler",
+ self._test_process.event_handler,
+ event,
+ None)
+ self._test_process.event_handler.assert_called_once_with(event)
+
+ def test_fire_class_handlers(self):
+ """ Test fire_class_handlers method """
+ mask = ["bb.event.OperationStarted"]
+ result = bb.event.register("event_handler1",
+ self._test_process.event_handler1,
+ mask)
+ self.assertEqual(result, bb.event.Registered)
+ result = bb.event.register("event_handler2",
+ self._test_process.event_handler2,
+ "*")
+ self.assertEqual(result, bb.event.Registered)
+ event1 = bb.event.OperationStarted()
+ event2 = bb.event.OperationCompleted(total=123)
+ bb.event.fire_class_handlers(event1, None)
+ bb.event.fire_class_handlers(event2, None)
+ bb.event.fire_class_handlers(event2, None)
+ expected_event_handler1 = [call(event1)]
+ expected_event_handler2 = [call(event1),
+ call(event2),
+ call(event2)]
+ self.assertEqual(self._test_process.event_handler1.call_args_list,
+ expected_event_handler1)
+ self.assertEqual(self._test_process.event_handler2.call_args_list,
+ expected_event_handler2)
+
+ def test_change_handler_event_mapping(self):
+ """ Test changing the event mapping for class handlers """
+ event1 = bb.event.OperationStarted()
+ event2 = bb.event.OperationCompleted(total=123)
+
+ # register handler for all events
+ result = bb.event.register("event_handler1",
+ self._test_process.event_handler1,
+ "*")
+ self.assertEqual(result, bb.event.Registered)
+ bb.event.fire_class_handlers(event1, None)
+ bb.event.fire_class_handlers(event2, None)
+ expected = [call(event1), call(event2)]
+ self.assertEqual(self._test_process.event_handler1.call_args_list,
+ expected)
+
+ # unregister handler and register it only for OperationStarted
+ result = bb.event.remove("event_handler1",
+ self._test_process.event_handler1)
+ mask = ["bb.event.OperationStarted"]
+ result = bb.event.register("event_handler1",
+ self._test_process.event_handler1,
+ mask)
+ self.assertEqual(result, bb.event.Registered)
+ bb.event.fire_class_handlers(event1, None)
+ bb.event.fire_class_handlers(event2, None)
+ expected = [call(event1), call(event2), call(event1)]
+ self.assertEqual(self._test_process.event_handler1.call_args_list,
+ expected)
+
+ # unregister handler and register it only for OperationCompleted
+ result = bb.event.remove("event_handler1",
+ self._test_process.event_handler1)
+ mask = ["bb.event.OperationCompleted"]
+ result = bb.event.register("event_handler1",
+ self._test_process.event_handler1,
+ mask)
+ self.assertEqual(result, bb.event.Registered)
+ bb.event.fire_class_handlers(event1, None)
+ bb.event.fire_class_handlers(event2, None)
+ expected = [call(event1), call(event2), call(event1), call(event2)]
+ self.assertEqual(self._test_process.event_handler1.call_args_list,
+ expected)
+
+ def test_register_UIHhandler(self):
+ """ Test register_UIHhandler method """
+ result = bb.event.register_UIHhandler(self._test_ui1, mainui=True)
+ self.assertEqual(result, 1)
+
+ def test_UIHhandler_already_registered(self):
+ """ Test registering an UIHhandler already existing """
+ result = bb.event.register_UIHhandler(self._test_ui1, mainui=True)
+ self.assertEqual(result, 1)
+ result = bb.event.register_UIHhandler(self._test_ui1, mainui=True)
+ self.assertEqual(result, 2)
+
+ def test_unregister_UIHhandler(self):
+ """ Test unregister_UIHhandler method """
+ result = bb.event.register_UIHhandler(self._test_ui1, mainui=True)
+ self.assertEqual(result, 1)
+ result = bb.event.unregister_UIHhandler(1)
+ self.assertIs(result, None)
+
+ def test_fire_ui_handlers(self):
+ """ Test fire_ui_handlers method """
+ self._test_ui1.event = Mock(spec_set=EventQueueStub)
+ result = bb.event.register_UIHhandler(self._test_ui1, mainui=True)
+ self.assertEqual(result, 1)
+ self._test_ui2.event = Mock(spec_set=PickleEventQueueStub)
+ result = bb.event.register_UIHhandler(self._test_ui2, mainui=True)
+ self.assertEqual(result, 2)
+ event1 = bb.event.OperationStarted()
+ bb.event.fire_ui_handlers(event1, None)
+ expected = [call(event1)]
+ self.assertEqual(self._test_ui1.event.send.call_args_list,
+ expected)
+ expected = [call(pickle.dumps(event1))]
+ self.assertEqual(self._test_ui2.event.sendpickle.call_args_list,
+ expected)
+
+ def test_fire(self):
+ """ Test fire method used to trigger class and ui event handlers """
+ mask = ["bb.event.ConfigParsed"]
+ result = bb.event.register("event_handler1",
+ self._test_process.event_handler1,
+ mask)
+
+ self._test_ui1.event = Mock(spec_set=EventQueueStub)
+ result = bb.event.register_UIHhandler(self._test_ui1, mainui=True)
+ self.assertEqual(result, 1)
+
+ event1 = bb.event.ConfigParsed()
+ bb.event.fire(event1, None)
+ expected = [call(event1)]
+ self.assertEqual(self._test_process.event_handler1.call_args_list,
+ expected)
+ self.assertEqual(self._test_ui1.event.send.call_args_list,
+ expected)
+
+ def test_fire_from_worker(self):
+ """ Test fire_from_worker method """
+ self._test_ui1.event = Mock(spec_set=EventQueueStub)
+ result = bb.event.register_UIHhandler(self._test_ui1, mainui=True)
+ self.assertEqual(result, 1)
+ event1 = bb.event.ConfigParsed()
+ bb.event.fire_from_worker(event1, None)
+ expected = [call(event1)]
+ self.assertEqual(self._test_ui1.event.send.call_args_list,
+ expected)
+
+ def test_print_ui_queue(self):
+ """ Test print_ui_queue method """
+ event1 = bb.event.OperationStarted()
+ event2 = bb.event.OperationCompleted(total=123)
+ bb.event.fire(event1, None)
+ bb.event.fire(event2, None)
+ logger = logging.getLogger("BitBake")
+ logger.addHandler(bb.event.LogHandler())
+ logger.info("Test info LogRecord")
+ logger.warning("Test warning LogRecord")
+ with self.assertLogs("BitBake", level="INFO") as cm:
+ bb.event.print_ui_queue()
+ self.assertEqual(cm.output,
+ ["INFO:BitBake:Test info LogRecord",
+ "WARNING:BitBake:Test warning LogRecord"])
+
+ def _set_threadlock_test_mockups(self):
+ """ Create UI event handler mockups used in enable and disable
+ threadlock tests """
+ def ui1_event_send(event):
+ if type(event) is bb.event.ConfigParsed:
+ self._threadlock_test_calls.append("w1_ui1")
+ if type(event) is bb.event.OperationStarted:
+ self._threadlock_test_calls.append("w2_ui1")
+ time.sleep(2)
+
+ def ui2_event_send(event):
+ if type(event) is bb.event.ConfigParsed:
+ self._threadlock_test_calls.append("w1_ui2")
+ if type(event) is bb.event.OperationStarted:
+ self._threadlock_test_calls.append("w2_ui2")
+ time.sleep(2)
+
+ self._threadlock_test_calls = []
+ self._test_ui1.event = EventQueueStub()
+ self._test_ui1.event.send = ui1_event_send
+ result = bb.event.register_UIHhandler(self._test_ui1, mainui=True)
+ self.assertEqual(result, 1)
+ self._test_ui2.event = EventQueueStub()
+ self._test_ui2.event.send = ui2_event_send
+ result = bb.event.register_UIHhandler(self._test_ui2, mainui=True)
+ self.assertEqual(result, 2)
+
+ def _set_and_run_threadlock_test_workers(self):
+ """ Create and run the workers used to trigger events in enable and
+ disable threadlock tests """
+ worker1 = threading.Thread(target=self._thread_lock_test_worker1)
+ worker2 = threading.Thread(target=self._thread_lock_test_worker2)
+ worker1.start()
+ time.sleep(1)
+ worker2.start()
+ worker1.join()
+ worker2.join()
+
+ def _thread_lock_test_worker1(self):
+ """ First worker used to fire the ConfigParsed event for enable and
+ disable threadlocks tests """
+ bb.event.fire(bb.event.ConfigParsed(), None)
+
+ def _thread_lock_test_worker2(self):
+ """ Second worker used to fire the OperationStarted event for enable
+ and disable threadlocks tests """
+ bb.event.fire(bb.event.OperationStarted(), None)
+
+ def test_enable_threadlock(self):
+ """ Test enable_threadlock method """
+ self._set_threadlock_test_mockups()
+ bb.event.enable_threadlock()
+ self._set_and_run_threadlock_test_workers()
+ # Calls to UI handlers should be in order as all the registered
+ # handlers for the event coming from the first worker should be
+ # called before processing the event from the second worker.
+ self.assertEqual(self._threadlock_test_calls,
+ ["w1_ui1", "w1_ui2", "w2_ui1", "w2_ui2"])
+
+ def test_disable_threadlock(self):
+ """ Test disable_threadlock method """
+ self._set_threadlock_test_mockups()
+ bb.event.disable_threadlock()
+ self._set_and_run_threadlock_test_workers()
+        # Calls to UI handlers should be interleaved. Thanks to the
+ # delay in the registered handlers for the event coming from the first
+ # worker, the event coming from the second worker starts being
+ # processed before finishing handling the first worker event.
+ self.assertEqual(self._threadlock_test_calls,
+ ["w1_ui1", "w2_ui1", "w1_ui2", "w2_ui2"])
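
The fetch tests below apply the same environment gate per test instead of conditionally defining the test methods: skipIfNoNetwork() returns unittest.skip(...) when BB_SKIP_NETTESTS is set and a no-op decorator otherwise, so the tests stay discoverable either way. A minimal standalone sketch of that decorator-factory pattern, with a hypothetical test case:

    import os
    import unittest

    def skipIfNoNetwork():
        if os.environ.get("BB_SKIP_NETTESTS") == "yes":
            return unittest.skip("Network tests being skipped")
        return lambda f: f          # no-op decorator when tests should run

    class ExampleNetworkTest(unittest.TestCase):
        @skipIfNoNetwork()
        def test_download(self):
            self.assertTrue(True)   # placeholder for a real network check

    if __name__ == '__main__':
        unittest.main()
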
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/tests/fetch.py b/import-layers/yocto-poky/bitbake/lib/bb/tests/fetch.py
index 5a8d892..7d7c5d7 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/tests/fetch.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/tests/fetch.py
@@ -28,6 +28,11 @@
from bb.fetch2 import FetchMethod
import bb
+def skipIfNoNetwork():
+ if os.environ.get("BB_SKIP_NETTESTS") == "yes":
+ return unittest.skip("Network tests being skipped")
+ return lambda f: f
+
class URITest(unittest.TestCase):
test_uris = {
"http://www.google.com/index.html" : {
@@ -518,141 +523,153 @@
self.fetchUnpack(['file://a;subdir=/bin/sh'])
class FetcherNetworkTest(FetcherTest):
+ @skipIfNoNetwork()
+ def test_fetch(self):
+ fetcher = bb.fetch.Fetch(["http://downloads.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz", "http://downloads.yoctoproject.org/releases/bitbake/bitbake-1.1.tar.gz"], self.d)
+ fetcher.download()
+ self.assertEqual(os.path.getsize(self.dldir + "/bitbake-1.0.tar.gz"), 57749)
+ self.assertEqual(os.path.getsize(self.dldir + "/bitbake-1.1.tar.gz"), 57892)
+ self.d.setVar("BB_NO_NETWORK", "1")
+ fetcher = bb.fetch.Fetch(["http://downloads.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz", "http://downloads.yoctoproject.org/releases/bitbake/bitbake-1.1.tar.gz"], self.d)
+ fetcher.download()
+ fetcher.unpack(self.unpackdir)
+ self.assertEqual(len(os.listdir(self.unpackdir + "/bitbake-1.0/")), 9)
+ self.assertEqual(len(os.listdir(self.unpackdir + "/bitbake-1.1/")), 9)
- if os.environ.get("BB_SKIP_NETTESTS") == "yes":
- print("Unset BB_SKIP_NETTESTS to run network tests")
- else:
- def test_fetch(self):
- fetcher = bb.fetch.Fetch(["http://downloads.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz", "http://downloads.yoctoproject.org/releases/bitbake/bitbake-1.1.tar.gz"], self.d)
- fetcher.download()
- self.assertEqual(os.path.getsize(self.dldir + "/bitbake-1.0.tar.gz"), 57749)
- self.assertEqual(os.path.getsize(self.dldir + "/bitbake-1.1.tar.gz"), 57892)
- self.d.setVar("BB_NO_NETWORK", "1")
- fetcher = bb.fetch.Fetch(["http://downloads.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz", "http://downloads.yoctoproject.org/releases/bitbake/bitbake-1.1.tar.gz"], self.d)
- fetcher.download()
+ @skipIfNoNetwork()
+ def test_fetch_mirror(self):
+ self.d.setVar("MIRRORS", "http://.*/.* http://downloads.yoctoproject.org/releases/bitbake")
+ fetcher = bb.fetch.Fetch(["http://invalid.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz"], self.d)
+ fetcher.download()
+ self.assertEqual(os.path.getsize(self.dldir + "/bitbake-1.0.tar.gz"), 57749)
+
+ @skipIfNoNetwork()
+ def test_fetch_mirror_of_mirror(self):
+ self.d.setVar("MIRRORS", "http://.*/.* http://invalid2.yoctoproject.org/ \n http://invalid2.yoctoproject.org/.* http://downloads.yoctoproject.org/releases/bitbake")
+ fetcher = bb.fetch.Fetch(["http://invalid.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz"], self.d)
+ fetcher.download()
+ self.assertEqual(os.path.getsize(self.dldir + "/bitbake-1.0.tar.gz"), 57749)
+
+ @skipIfNoNetwork()
+ def test_fetch_file_mirror_of_mirror(self):
+ self.d.setVar("MIRRORS", "http://.*/.* file:///some1where/ \n file:///some1where/.* file://some2where/ \n file://some2where/.* http://downloads.yoctoproject.org/releases/bitbake")
+ fetcher = bb.fetch.Fetch(["http://invalid.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz"], self.d)
+ os.mkdir(self.dldir + "/some2where")
+ fetcher.download()
+ self.assertEqual(os.path.getsize(self.dldir + "/bitbake-1.0.tar.gz"), 57749)
+
+ @skipIfNoNetwork()
+ def test_fetch_premirror(self):
+ self.d.setVar("PREMIRRORS", "http://.*/.* http://downloads.yoctoproject.org/releases/bitbake")
+ fetcher = bb.fetch.Fetch(["http://invalid.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz"], self.d)
+ fetcher.download()
+ self.assertEqual(os.path.getsize(self.dldir + "/bitbake-1.0.tar.gz"), 57749)
+
+ @skipIfNoNetwork()
+ def gitfetcher(self, url1, url2):
+ def checkrevision(self, fetcher):
fetcher.unpack(self.unpackdir)
- self.assertEqual(len(os.listdir(self.unpackdir + "/bitbake-1.0/")), 9)
- self.assertEqual(len(os.listdir(self.unpackdir + "/bitbake-1.1/")), 9)
+ revision = bb.process.run("git rev-parse HEAD", shell=True, cwd=self.unpackdir + "/git")[0].strip()
+ self.assertEqual(revision, "270a05b0b4ba0959fe0624d2a4885d7b70426da5")
- def test_fetch_mirror(self):
- self.d.setVar("MIRRORS", "http://.*/.* http://downloads.yoctoproject.org/releases/bitbake")
- fetcher = bb.fetch.Fetch(["http://invalid.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz"], self.d)
- fetcher.download()
- self.assertEqual(os.path.getsize(self.dldir + "/bitbake-1.0.tar.gz"), 57749)
+ self.d.setVar("BB_GENERATE_MIRROR_TARBALLS", "1")
+ self.d.setVar("SRCREV", "270a05b0b4ba0959fe0624d2a4885d7b70426da5")
+ fetcher = bb.fetch.Fetch([url1], self.d)
+ fetcher.download()
+ checkrevision(self, fetcher)
+ # Wipe out the dldir clone and the unpacked source, turn off the network and check mirror tarball works
+ bb.utils.prunedir(self.dldir + "/git2/")
+ bb.utils.prunedir(self.unpackdir)
+ self.d.setVar("BB_NO_NETWORK", "1")
+ fetcher = bb.fetch.Fetch([url2], self.d)
+ fetcher.download()
+ checkrevision(self, fetcher)
- def test_fetch_mirror_of_mirror(self):
- self.d.setVar("MIRRORS", "http://.*/.* http://invalid2.yoctoproject.org/ \n http://invalid2.yoctoproject.org/.* http://downloads.yoctoproject.org/releases/bitbake")
- fetcher = bb.fetch.Fetch(["http://invalid.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz"], self.d)
- fetcher.download()
- self.assertEqual(os.path.getsize(self.dldir + "/bitbake-1.0.tar.gz"), 57749)
+ @skipIfNoNetwork()
+ def test_gitfetch(self):
+ url1 = url2 = "git://git.openembedded.org/bitbake"
+ self.gitfetcher(url1, url2)
- def test_fetch_file_mirror_of_mirror(self):
- self.d.setVar("MIRRORS", "http://.*/.* file:///some1where/ \n file:///some1where/.* file://some2where/ \n file://some2where/.* http://downloads.yoctoproject.org/releases/bitbake")
- fetcher = bb.fetch.Fetch(["http://invalid.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz"], self.d)
- os.mkdir(self.dldir + "/some2where")
- fetcher.download()
- self.assertEqual(os.path.getsize(self.dldir + "/bitbake-1.0.tar.gz"), 57749)
+ @skipIfNoNetwork()
+ def test_gitfetch_goodsrcrev(self):
+ # SRCREV is set but matches rev= parameter
+ url1 = url2 = "git://git.openembedded.org/bitbake;rev=270a05b0b4ba0959fe0624d2a4885d7b70426da5"
+ self.gitfetcher(url1, url2)
- def test_fetch_premirror(self):
- self.d.setVar("PREMIRRORS", "http://.*/.* http://downloads.yoctoproject.org/releases/bitbake")
- fetcher = bb.fetch.Fetch(["http://invalid.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz"], self.d)
- fetcher.download()
- self.assertEqual(os.path.getsize(self.dldir + "/bitbake-1.0.tar.gz"), 57749)
+ @skipIfNoNetwork()
+ def test_gitfetch_badsrcrev(self):
+ # SRCREV is set but does not match rev= parameter
+ url1 = url2 = "git://git.openembedded.org/bitbake;rev=dead05b0b4ba0959fe0624d2a4885d7b70426da5"
+ self.assertRaises(bb.fetch.FetchError, self.gitfetcher, url1, url2)
- def gitfetcher(self, url1, url2):
- def checkrevision(self, fetcher):
- fetcher.unpack(self.unpackdir)
- revision = bb.process.run("git rev-parse HEAD", shell=True, cwd=self.unpackdir + "/git")[0].strip()
- self.assertEqual(revision, "270a05b0b4ba0959fe0624d2a4885d7b70426da5")
+ @skipIfNoNetwork()
+ def test_gitfetch_tagandrev(self):
+ # Both rev= and tag= are specified on the URL, which the git fetcher rejects
+ url1 = url2 = "git://git.openembedded.org/bitbake;rev=270a05b0b4ba0959fe0624d2a4885d7b70426da5;tag=270a05b0b4ba0959fe0624d2a4885d7b70426da5"
+ self.assertRaises(bb.fetch.FetchError, self.gitfetcher, url1, url2)
- self.d.setVar("BB_GENERATE_MIRROR_TARBALLS", "1")
- self.d.setVar("SRCREV", "270a05b0b4ba0959fe0624d2a4885d7b70426da5")
- fetcher = bb.fetch.Fetch([url1], self.d)
- fetcher.download()
- checkrevision(self, fetcher)
- # Wipe out the dldir clone and the unpacked source, turn off the network and check mirror tarball works
- bb.utils.prunedir(self.dldir + "/git2/")
- bb.utils.prunedir(self.unpackdir)
- self.d.setVar("BB_NO_NETWORK", "1")
- fetcher = bb.fetch.Fetch([url2], self.d)
- fetcher.download()
- checkrevision(self, fetcher)
+ @skipIfNoNetwork()
+ def test_gitfetch_localusehead(self):
+ # Create dummy local Git repo
+ src_dir = tempfile.mkdtemp(dir=self.tempdir,
+ prefix='gitfetch_localusehead_')
+ src_dir = os.path.abspath(src_dir)
+ bb.process.run("git init", cwd=src_dir)
+ bb.process.run("git commit --allow-empty -m'Dummy commit'",
+ cwd=src_dir)
+ # Use a branch other than master
+ bb.process.run("git checkout -b my-devel", cwd=src_dir)
+ bb.process.run("git commit --allow-empty -m'Dummy commit 2'",
+ cwd=src_dir)
+ stdout = bb.process.run("git rev-parse HEAD", cwd=src_dir)
+ orig_rev = stdout[0].strip()
- def test_gitfetch(self):
- url1 = url2 = "git://git.openembedded.org/bitbake"
- self.gitfetcher(url1, url2)
+ # Fetch and check revision
+ self.d.setVar("SRCREV", "AUTOINC")
+ url = "git://" + src_dir + ";protocol=file;usehead=1"
+ fetcher = bb.fetch.Fetch([url], self.d)
+ fetcher.download()
+ fetcher.unpack(self.unpackdir)
+ stdout = bb.process.run("git rev-parse HEAD",
+ cwd=os.path.join(self.unpackdir, 'git'))
+ unpack_rev = stdout[0].strip()
+ self.assertEqual(orig_rev, unpack_rev)
- def test_gitfetch_goodsrcrev(self):
- # SRCREV is set but matches rev= parameter
- url1 = url2 = "git://git.openembedded.org/bitbake;rev=270a05b0b4ba0959fe0624d2a4885d7b70426da5"
- self.gitfetcher(url1, url2)
+ @skipIfNoNetwork()
+ def test_gitfetch_remoteusehead(self):
+ url = "git://git.openembedded.org/bitbake;usehead=1"
+ self.assertRaises(bb.fetch.ParameterError, self.gitfetcher, url, url)
- def test_gitfetch_badsrcrev(self):
- # SRCREV is set but does not match rev= parameter
- url1 = url2 = "git://git.openembedded.org/bitbake;rev=dead05b0b4ba0959fe0624d2a4885d7b70426da5"
- self.assertRaises(bb.fetch.FetchError, self.gitfetcher, url1, url2)
+ @skipIfNoNetwork()
+ def test_gitfetch_premirror(self):
+ url1 = "git://git.openembedded.org/bitbake"
+ url2 = "git://someserver.org/bitbake"
+ self.d.setVar("PREMIRRORS", "git://someserver.org/bitbake git://git.openembedded.org/bitbake \n")
+ self.gitfetcher(url1, url2)
- def test_gitfetch_tagandrev(self):
- # SRCREV is set but does not match rev= parameter
- url1 = url2 = "git://git.openembedded.org/bitbake;rev=270a05b0b4ba0959fe0624d2a4885d7b70426da5;tag=270a05b0b4ba0959fe0624d2a4885d7b70426da5"
- self.assertRaises(bb.fetch.FetchError, self.gitfetcher, url1, url2)
+ @skipIfNoNetwork()
+ def test_gitfetch_premirror2(self):
+ url1 = url2 = "git://someserver.org/bitbake"
+ self.d.setVar("PREMIRRORS", "git://someserver.org/bitbake git://git.openembedded.org/bitbake \n")
+ self.gitfetcher(url1, url2)
- def test_gitfetch_localusehead(self):
- # Create dummy local Git repo
- src_dir = tempfile.mkdtemp(dir=self.tempdir,
- prefix='gitfetch_localusehead_')
- src_dir = os.path.abspath(src_dir)
- bb.process.run("git init", cwd=src_dir)
- bb.process.run("git commit --allow-empty -m'Dummy commit'",
- cwd=src_dir)
- # Use other branch than master
- bb.process.run("git checkout -b my-devel", cwd=src_dir)
- bb.process.run("git commit --allow-empty -m'Dummy commit 2'",
- cwd=src_dir)
- stdout = bb.process.run("git rev-parse HEAD", cwd=src_dir)
- orig_rev = stdout[0].strip()
+ @skipIfNoNetwork()
+ def test_gitfetch_premirror3(self):
+ realurl = "git://git.openembedded.org/bitbake"
+ dummyurl = "git://someserver.org/bitbake"
+ self.sourcedir = self.unpackdir.replace("unpacked", "sourcemirror.git")
+ os.chdir(self.tempdir)
+ bb.process.run("git clone %s %s 2> /dev/null" % (realurl, self.sourcedir), shell=True)
+ self.d.setVar("PREMIRRORS", "%s git://%s;protocol=file \n" % (dummyurl, self.sourcedir))
+ self.gitfetcher(dummyurl, dummyurl)
- # Fetch and check revision
- self.d.setVar("SRCREV", "AUTOINC")
- url = "git://" + src_dir + ";protocol=file;usehead=1"
- fetcher = bb.fetch.Fetch([url], self.d)
- fetcher.download()
- fetcher.unpack(self.unpackdir)
- stdout = bb.process.run("git rev-parse HEAD",
- cwd=os.path.join(self.unpackdir, 'git'))
- unpack_rev = stdout[0].strip()
- self.assertEqual(orig_rev, unpack_rev)
-
- def test_gitfetch_remoteusehead(self):
- url = "git://git.openembedded.org/bitbake;usehead=1"
- self.assertRaises(bb.fetch.ParameterError, self.gitfetcher, url, url)
-
- def test_gitfetch_premirror(self):
- url1 = "git://git.openembedded.org/bitbake"
- url2 = "git://someserver.org/bitbake"
- self.d.setVar("PREMIRRORS", "git://someserver.org/bitbake git://git.openembedded.org/bitbake \n")
- self.gitfetcher(url1, url2)
-
- def test_gitfetch_premirror2(self):
- url1 = url2 = "git://someserver.org/bitbake"
- self.d.setVar("PREMIRRORS", "git://someserver.org/bitbake git://git.openembedded.org/bitbake \n")
- self.gitfetcher(url1, url2)
-
- def test_gitfetch_premirror3(self):
- realurl = "git://git.openembedded.org/bitbake"
- dummyurl = "git://someserver.org/bitbake"
- self.sourcedir = self.unpackdir.replace("unpacked", "sourcemirror.git")
- os.chdir(self.tempdir)
- bb.process.run("git clone %s %s 2> /dev/null" % (realurl, self.sourcedir), shell=True)
- self.d.setVar("PREMIRRORS", "%s git://%s;protocol=file \n" % (dummyurl, self.sourcedir))
- self.gitfetcher(dummyurl, dummyurl)
-
- def test_git_submodule(self):
- fetcher = bb.fetch.Fetch(["gitsm://git.yoctoproject.org/git-submodule-test;rev=f12e57f2edf0aa534cf1616fa983d165a92b0842"], self.d)
- fetcher.download()
- # Previous cwd has been deleted
- os.chdir(os.path.dirname(self.unpackdir))
- fetcher.unpack(self.unpackdir)
+ @skipIfNoNetwork()
+ def test_git_submodule(self):
+ fetcher = bb.fetch.Fetch(["gitsm://git.yoctoproject.org/git-submodule-test;rev=f12e57f2edf0aa534cf1616fa983d165a92b0842"], self.d)
+ fetcher.download()
+ # Previous cwd has been deleted
+ os.chdir(os.path.dirname(self.unpackdir))
+ fetcher.unpack(self.unpackdir)
class TrustedNetworksTest(FetcherTest):
@@ -782,32 +799,32 @@
("db", "http://download.oracle.com/berkeley-db/db-5.3.21.tar.gz", "http://www.oracle.com/technetwork/products/berkeleydb/downloads/index-082944.html", "http://download.oracle.com/otn/berkeley-db/(?P<name>db-)(?P<pver>((\d+[\.\-_]*)+))\.tar\.gz")
: "6.1.19",
}
- if os.environ.get("BB_SKIP_NETTESTS") == "yes":
- print("Unset BB_SKIP_NETTESTS to run network tests")
- else:
- def test_git_latest_versionstring(self):
- for k, v in self.test_git_uris.items():
- self.d.setVar("PN", k[0])
- self.d.setVar("SRCREV", k[2])
- self.d.setVar("UPSTREAM_CHECK_GITTAGREGEX", k[3])
- ud = bb.fetch2.FetchData(k[1], self.d)
- pupver= ud.method.latest_versionstring(ud, self.d)
- verstring = pupver[0]
- self.assertTrue(verstring, msg="Could not find upstream version")
- r = bb.utils.vercmp_string(v, verstring)
- self.assertTrue(r == -1 or r == 0, msg="Package %s, version: %s <= %s" % (k[0], v, verstring))
- def test_wget_latest_versionstring(self):
- for k, v in self.test_wget_uris.items():
- self.d.setVar("PN", k[0])
- self.d.setVar("UPSTREAM_CHECK_URI", k[2])
- self.d.setVar("UPSTREAM_CHECK_REGEX", k[3])
- ud = bb.fetch2.FetchData(k[1], self.d)
- pupver = ud.method.latest_versionstring(ud, self.d)
- verstring = pupver[0]
- self.assertTrue(verstring, msg="Could not find upstream version")
- r = bb.utils.vercmp_string(v, verstring)
- self.assertTrue(r == -1 or r == 0, msg="Package %s, version: %s <= %s" % (k[0], v, verstring))
+ @skipIfNoNetwork()
+ def test_git_latest_versionstring(self):
+ for k, v in self.test_git_uris.items():
+ self.d.setVar("PN", k[0])
+ self.d.setVar("SRCREV", k[2])
+ self.d.setVar("UPSTREAM_CHECK_GITTAGREGEX", k[3])
+ ud = bb.fetch2.FetchData(k[1], self.d)
+ pupver= ud.method.latest_versionstring(ud, self.d)
+ verstring = pupver[0]
+ self.assertTrue(verstring, msg="Could not find upstream version")
+ r = bb.utils.vercmp_string(v, verstring)
+ self.assertTrue(r == -1 or r == 0, msg="Package %s, version: %s <= %s" % (k[0], v, verstring))
+
+ @skipIfNoNetwork()
+ def test_wget_latest_versionstring(self):
+ for k, v in self.test_wget_uris.items():
+ self.d.setVar("PN", k[0])
+ self.d.setVar("UPSTREAM_CHECK_URI", k[2])
+ self.d.setVar("UPSTREAM_CHECK_REGEX", k[3])
+ ud = bb.fetch2.FetchData(k[1], self.d)
+ pupver = ud.method.latest_versionstring(ud, self.d)
+ verstring = pupver[0]
+ self.assertTrue(verstring, msg="Could not find upstream version")
+ r = bb.utils.vercmp_string(v, verstring)
+ self.assertTrue(r == -1 or r == 0, msg="Package %s, version: %s <= %s" % (k[0], v, verstring))
class FetchCheckStatusTest(FetcherTest):
@@ -820,37 +837,636 @@
"https://yoctoproject.org/documentation",
"http://downloads.yoctoproject.org/releases/opkg/opkg-0.1.7.tar.gz",
"http://downloads.yoctoproject.org/releases/opkg/opkg-0.3.0.tar.gz",
- "ftp://ftp.gnu.org/gnu/autoconf/autoconf-2.60.tar.gz",
- "ftp://ftp.gnu.org/gnu/chess/gnuchess-5.08.tar.gz",
- "ftp://ftp.gnu.org/gnu/gmp/gmp-4.0.tar.gz",
+ "ftp://sourceware.org/pub/libffi/libffi-1.20.tar.gz",
+ "http://ftp.gnu.org/gnu/autoconf/autoconf-2.60.tar.gz",
+ "https://ftp.gnu.org/gnu/chess/gnuchess-5.08.tar.gz",
+ "https://ftp.gnu.org/gnu/gmp/gmp-4.0.tar.gz",
# GitHub releases are hosted on Amazon S3, which doesn't support HEAD
"https://github.com/kergoth/tslib/releases/download/1.1/tslib-1.1.tar.xz"
]
- if os.environ.get("BB_SKIP_NETTESTS") == "yes":
- print("Unset BB_SKIP_NETTESTS to run network tests")
- else:
-
- def test_wget_checkstatus(self):
- fetch = bb.fetch2.Fetch(self.test_wget_uris, self.d)
- for u in self.test_wget_uris:
+ @skipIfNoNetwork()
+ def test_wget_checkstatus(self):
+ fetch = bb.fetch2.Fetch(self.test_wget_uris, self.d)
+ for u in self.test_wget_uris:
+ with self.subTest(url=u):
ud = fetch.ud[u]
m = ud.method
ret = m.checkstatus(fetch, ud, self.d)
self.assertTrue(ret, msg="URI %s, can't check status" % (u))
+ @skipIfNoNetwork()
+ def test_wget_checkstatus_connection_cache(self):
+ from bb.fetch2 import FetchConnectionCache
- def test_wget_checkstatus_connection_cache(self):
- from bb.fetch2 import FetchConnectionCache
+ connection_cache = FetchConnectionCache()
+ fetch = bb.fetch2.Fetch(self.test_wget_uris, self.d,
+ connection_cache = connection_cache)
- connection_cache = FetchConnectionCache()
- fetch = bb.fetch2.Fetch(self.test_wget_uris, self.d,
- connection_cache = connection_cache)
-
- for u in self.test_wget_uris:
+ for u in self.test_wget_uris:
+ with self.subTest(url=u):
ud = fetch.ud[u]
m = ud.method
ret = m.checkstatus(fetch, ud, self.d)
self.assertTrue(ret, msg="URI %s, can't check status" % (u))
- connection_cache.close_connections()
+ connection_cache.close_connections()
+
+
+class GitMakeShallowTest(FetcherTest):
+ bitbake_dir = os.path.join(os.path.dirname(os.path.join(__file__)), '..', '..', '..')
+ make_shallow_path = os.path.join(bitbake_dir, 'bin', 'git-make-shallow')
+
+ def setUp(self):
+ FetcherTest.setUp(self)
+ self.gitdir = os.path.join(self.tempdir, 'gitshallow')
+ bb.utils.mkdirhier(self.gitdir)
+ bb.process.run('git init', cwd=self.gitdir)
+
+ def assertRefs(self, expected_refs):
+ actual_refs = self.git(['for-each-ref', '--format=%(refname)']).splitlines()
+ full_expected = self.git(['rev-parse', '--symbolic-full-name'] + expected_refs).splitlines()
+ self.assertEqual(sorted(full_expected), sorted(actual_refs))
+
+ def assertRevCount(self, expected_count, args=None):
+ if args is None:
+ args = ['HEAD']
+ revs = self.git(['rev-list'] + args)
+ actual_count = len(revs.splitlines())
+ self.assertEqual(expected_count, actual_count, msg='Object count `%d` is not the expected `%d`' % (actual_count, expected_count))
+
+ def git(self, cmd):
+ if isinstance(cmd, str):
+ cmd = 'git ' + cmd
+ else:
+ cmd = ['git'] + cmd
+ return bb.process.run(cmd, cwd=self.gitdir)[0]
+
+ def make_shallow(self, args=None):
+ if args is None:
+ args = ['HEAD']
+ return bb.process.run([self.make_shallow_path] + args, cwd=self.gitdir)
+
+ def add_empty_file(self, path, msg=None):
+ if msg is None:
+ msg = path
+ open(os.path.join(self.gitdir, path), 'w').close()
+ self.git(['add', path])
+ self.git(['commit', '-m', msg, path])
+
+ def test_make_shallow_single_branch_no_merge(self):
+ self.add_empty_file('a')
+ self.add_empty_file('b')
+ self.assertRevCount(2)
+ self.make_shallow()
+ self.assertRevCount(1)
+
+ def test_make_shallow_single_branch_one_merge(self):
+ self.add_empty_file('a')
+ self.add_empty_file('b')
+ self.git('checkout -b a_branch')
+ self.add_empty_file('c')
+ self.git('checkout master')
+ self.add_empty_file('d')
+ self.git('merge --no-ff --no-edit a_branch')
+ self.git('branch -d a_branch')
+ self.add_empty_file('e')
+ self.assertRevCount(6)
+ self.make_shallow(['HEAD~2'])
+ self.assertRevCount(5)
+
+ def test_make_shallow_at_merge(self):
+ self.add_empty_file('a')
+ self.git('checkout -b a_branch')
+ self.add_empty_file('b')
+ self.git('checkout master')
+ self.git('merge --no-ff --no-edit a_branch')
+ self.git('branch -d a_branch')
+ self.assertRevCount(3)
+ self.make_shallow()
+ self.assertRevCount(1)
+
+ def test_make_shallow_annotated_tag(self):
+ self.add_empty_file('a')
+ self.add_empty_file('b')
+ self.git('tag -a -m a_tag a_tag')
+ self.assertRevCount(2)
+ self.make_shallow(['a_tag'])
+ self.assertRevCount(1)
+
+ def test_make_shallow_multi_ref(self):
+ self.add_empty_file('a')
+ self.add_empty_file('b')
+ self.git('checkout -b a_branch')
+ self.add_empty_file('c')
+ self.git('checkout master')
+ self.add_empty_file('d')
+ self.git('checkout -b a_branch_2')
+ self.add_empty_file('a_tag')
+ self.git('tag a_tag')
+ self.git('checkout master')
+ self.git('branch -D a_branch_2')
+ self.add_empty_file('e')
+ self.assertRevCount(6, ['--all'])
+ self.make_shallow()
+ self.assertRevCount(5, ['--all'])
+
+ def test_make_shallow_multi_ref_trim(self):
+ self.add_empty_file('a')
+ self.git('checkout -b a_branch')
+ self.add_empty_file('c')
+ self.git('checkout master')
+ self.assertRevCount(1)
+ self.assertRevCount(2, ['--all'])
+ self.assertRefs(['master', 'a_branch'])
+ self.make_shallow(['-r', 'master', 'HEAD'])
+ self.assertRevCount(1, ['--all'])
+ self.assertRefs(['master'])
+
+ def test_make_shallow_noop(self):
+ self.add_empty_file('a')
+ self.assertRevCount(1)
+ self.make_shallow()
+ self.assertRevCount(1)
+
+ @skipIfNoNetwork()
+ def test_make_shallow_bitbake(self):
+ self.git('remote add origin https://github.com/openembedded/bitbake')
+ self.git('fetch --tags origin')
+ orig_revs = len(self.git('rev-list --all').splitlines())
+ self.make_shallow(['refs/tags/1.10.0'])
+ self.assertRevCount(orig_revs - 1746, ['--all'])
+
+class GitShallowTest(FetcherTest):
+ def setUp(self):
+ FetcherTest.setUp(self)
+ self.gitdir = os.path.join(self.tempdir, 'git')
+ self.srcdir = os.path.join(self.tempdir, 'gitsource')
+
+ bb.utils.mkdirhier(self.srcdir)
+ self.git('init', cwd=self.srcdir)
+ self.d.setVar('WORKDIR', self.tempdir)
+ self.d.setVar('S', self.gitdir)
+ self.d.delVar('PREMIRRORS')
+ self.d.delVar('MIRRORS')
+
+ uri = 'git://%s;protocol=file;subdir=${S}' % self.srcdir
+ self.d.setVar('SRC_URI', uri)
+ self.d.setVar('SRCREV', '${AUTOREV}')
+ self.d.setVar('AUTOREV', '${@bb.fetch2.get_autorev(d)}')
+
+ self.d.setVar('BB_GIT_SHALLOW', '1')
+ self.d.setVar('BB_GENERATE_MIRROR_TARBALLS', '0')
+ self.d.setVar('BB_GENERATE_SHALLOW_TARBALLS', '1')
+
+ def assertRefs(self, expected_refs, cwd=None):
+ if cwd is None:
+ cwd = self.gitdir
+ actual_refs = self.git(['for-each-ref', '--format=%(refname)'], cwd=cwd).splitlines()
+ full_expected = self.git(['rev-parse', '--symbolic-full-name'] + expected_refs, cwd=cwd).splitlines()
+ self.assertEqual(sorted(set(full_expected)), sorted(set(actual_refs)))
+
+ def assertRevCount(self, expected_count, args=None, cwd=None):
+ if args is None:
+ args = ['HEAD']
+ if cwd is None:
+ cwd = self.gitdir
+ revs = self.git(['rev-list'] + args, cwd=cwd)
+ actual_count = len(revs.splitlines())
+ self.assertEqual(expected_count, actual_count, msg='Object count `%d` is not the expected `%d`' % (actual_count, expected_count))
+
+ def git(self, cmd, cwd=None):
+ if isinstance(cmd, str):
+ cmd = 'git ' + cmd
+ else:
+ cmd = ['git'] + cmd
+ if cwd is None:
+ cwd = self.gitdir
+ return bb.process.run(cmd, cwd=cwd)[0]
+
+ def add_empty_file(self, path, cwd=None, msg=None):
+ if msg is None:
+ msg = path
+ if cwd is None:
+ cwd = self.srcdir
+ open(os.path.join(cwd, path), 'w').close()
+ self.git(['add', path], cwd)
+ self.git(['commit', '-m', msg, path], cwd)
+
+ def fetch(self, uri=None):
+ if uri is None:
+ uris = self.d.getVar('SRC_URI', True).split()
+ uri = uris[0]
+ d = self.d
+ else:
+ d = self.d.createCopy()
+ d.setVar('SRC_URI', uri)
+ uri = d.expand(uri)
+ uris = [uri]
+
+ fetcher = bb.fetch2.Fetch(uris, d)
+ fetcher.download()
+ ud = fetcher.ud[uri]
+ return fetcher, ud
+
+ def fetch_and_unpack(self, uri=None):
+ fetcher, ud = self.fetch(uri)
+ fetcher.unpack(self.d.getVar('WORKDIR'))
+ assert os.path.exists(self.d.getVar('S'))
+ return fetcher, ud
+
+ def fetch_shallow(self, uri=None, disabled=False, keepclone=False):
+ """Fetch a uri, generating a shallow tarball, then unpack using it"""
+ fetcher, ud = self.fetch_and_unpack(uri)
+ assert os.path.exists(ud.clonedir), 'Git clone in DLDIR (%s) does not exist for uri %s' % (ud.clonedir, uri)
+
+ # Confirm that the unpacked repo is unshallow
+ if not disabled:
+ assert os.path.exists(os.path.join(self.dldir, ud.mirrortarballs[0]))
+
+ # fetch and unpack, from the shallow tarball
+ bb.utils.remove(self.gitdir, recurse=True)
+ bb.utils.remove(ud.clonedir, recurse=True)
+
+ # confirm that the unpacked repo is used when no git clone or git
+ # mirror tarball is available
+ fetcher, ud = self.fetch_and_unpack(uri)
+ if not disabled:
+ assert os.path.exists(os.path.join(self.gitdir, '.git', 'shallow')), 'Unpacked git repository at %s is not shallow' % self.gitdir
+ else:
+ assert not os.path.exists(os.path.join(self.gitdir, '.git', 'shallow')), 'Unpacked git repository at %s is shallow' % self.gitdir
+ return fetcher, ud
+
+ def test_shallow_disabled(self):
+ self.add_empty_file('a')
+ self.add_empty_file('b')
+ self.assertRevCount(2, cwd=self.srcdir)
+
+ self.d.setVar('BB_GIT_SHALLOW', '0')
+ self.fetch_shallow(disabled=True)
+ self.assertRevCount(2)
+
+ def test_shallow_nobranch(self):
+ self.add_empty_file('a')
+ self.add_empty_file('b')
+ self.assertRevCount(2, cwd=self.srcdir)
+
+ srcrev = self.git('rev-parse HEAD', cwd=self.srcdir).strip()
+ self.d.setVar('SRCREV', srcrev)
+ uri = self.d.getVar('SRC_URI', True).split()[0]
+ uri = '%s;nobranch=1;bare=1' % uri
+
+ self.fetch_shallow(uri)
+ self.assertRevCount(1)
+
+ # shallow refs are used to ensure the srcrev sticks around when we
+ # have no other branches referencing it
+ self.assertRefs(['refs/shallow/default'])
+
+ def test_shallow_default_depth_1(self):
+ # Create initial git repo
+ self.add_empty_file('a')
+ self.add_empty_file('b')
+ self.assertRevCount(2, cwd=self.srcdir)
+
+ self.fetch_shallow()
+ self.assertRevCount(1)
+
+ def test_shallow_depth_0_disables(self):
+ self.add_empty_file('a')
+ self.add_empty_file('b')
+ self.assertRevCount(2, cwd=self.srcdir)
+
+ self.d.setVar('BB_GIT_SHALLOW_DEPTH', '0')
+ self.fetch_shallow(disabled=True)
+ self.assertRevCount(2)
+
+ def test_shallow_depth_default_override(self):
+ self.add_empty_file('a')
+ self.add_empty_file('b')
+ self.assertRevCount(2, cwd=self.srcdir)
+
+ self.d.setVar('BB_GIT_SHALLOW_DEPTH', '2')
+ self.d.setVar('BB_GIT_SHALLOW_DEPTH_default', '1')
+ self.fetch_shallow()
+ self.assertRevCount(1)
+
+ def test_shallow_depth_default_override_disable(self):
+ self.add_empty_file('a')
+ self.add_empty_file('b')
+ self.add_empty_file('c')
+ self.assertRevCount(3, cwd=self.srcdir)
+
+ self.d.setVar('BB_GIT_SHALLOW_DEPTH', '0')
+ self.d.setVar('BB_GIT_SHALLOW_DEPTH_default', '2')
+ self.fetch_shallow()
+ self.assertRevCount(2)
+
+ def test_current_shallow_out_of_date_clone(self):
+ # Create initial git repo
+ self.add_empty_file('a')
+ self.add_empty_file('b')
+ self.add_empty_file('c')
+ self.assertRevCount(3, cwd=self.srcdir)
+
+ # Clone and generate mirror tarball
+ fetcher, ud = self.fetch()
+
+ # Ensure we have a current mirror tarball, but an out of date clone
+ self.git('update-ref refs/heads/master refs/heads/master~1', cwd=ud.clonedir)
+ self.assertRevCount(2, cwd=ud.clonedir)
+
+ # Fetch and unpack, from the current tarball, not the out of date clone
+ bb.utils.remove(self.gitdir, recurse=True)
+ fetcher, ud = self.fetch()
+ fetcher.unpack(self.d.getVar('WORKDIR'))
+ self.assertRevCount(1)
+
+ def test_shallow_single_branch_no_merge(self):
+ self.add_empty_file('a')
+ self.add_empty_file('b')
+ self.assertRevCount(2, cwd=self.srcdir)
+
+ self.fetch_shallow()
+ self.assertRevCount(1)
+ assert os.path.exists(os.path.join(self.gitdir, 'a'))
+ assert os.path.exists(os.path.join(self.gitdir, 'b'))
+
+ def test_shallow_no_dangling(self):
+ self.add_empty_file('a')
+ self.add_empty_file('b')
+ self.assertRevCount(2, cwd=self.srcdir)
+
+ self.fetch_shallow()
+ self.assertRevCount(1)
+ assert not self.git('fsck --dangling')
+
+ def test_shallow_srcrev_branch_truncation(self):
+ self.add_empty_file('a')
+ self.add_empty_file('b')
+ b_commit = self.git('rev-parse HEAD', cwd=self.srcdir).rstrip()
+ self.add_empty_file('c')
+ self.assertRevCount(3, cwd=self.srcdir)
+
+ self.d.setVar('SRCREV', b_commit)
+ self.fetch_shallow()
+
+ # The 'c' commit was removed entirely, and 'a' was removed from history
+ self.assertRevCount(1, ['--all'])
+ self.assertEqual(self.git('rev-parse HEAD').strip(), b_commit)
+ assert os.path.exists(os.path.join(self.gitdir, 'a'))
+ assert os.path.exists(os.path.join(self.gitdir, 'b'))
+ assert not os.path.exists(os.path.join(self.gitdir, 'c'))
+
+ def test_shallow_ref_pruning(self):
+ self.add_empty_file('a')
+ self.add_empty_file('b')
+ self.git('branch a_branch', cwd=self.srcdir)
+ self.assertRefs(['master', 'a_branch'], cwd=self.srcdir)
+ self.assertRevCount(2, cwd=self.srcdir)
+
+ self.fetch_shallow()
+
+ self.assertRefs(['master', 'origin/master'])
+ self.assertRevCount(1)
+
+ def test_shallow_submodules(self):
+ self.add_empty_file('a')
+ self.add_empty_file('b')
+
+ smdir = os.path.join(self.tempdir, 'gitsubmodule')
+ bb.utils.mkdirhier(smdir)
+ self.git('init', cwd=smdir)
+ self.add_empty_file('asub', cwd=smdir)
+
+ self.git('submodule init', cwd=self.srcdir)
+ self.git('submodule add file://%s' % smdir, cwd=self.srcdir)
+ self.git('submodule update', cwd=self.srcdir)
+ self.git('commit -m submodule -a', cwd=self.srcdir)
+
+ uri = 'gitsm://%s;protocol=file;subdir=${S}' % self.srcdir
+ fetcher, ud = self.fetch_shallow(uri)
+
+ self.assertRevCount(1)
+ assert './.git/modules/' in bb.process.run('tar -tzf %s' % os.path.join(self.dldir, ud.mirrortarballs[0]))[0]
+ assert os.listdir(os.path.join(self.gitdir, 'gitsubmodule'))
+
+ if any(os.path.exists(os.path.join(p, 'git-annex')) for p in os.environ.get('PATH').split(':')):
+ def test_shallow_annex(self):
+ self.add_empty_file('a')
+ self.add_empty_file('b')
+ self.git('annex init', cwd=self.srcdir)
+ open(os.path.join(self.srcdir, 'c'), 'w').close()
+ self.git('annex add c', cwd=self.srcdir)
+ self.git('commit -m annex-c -a', cwd=self.srcdir)
+ bb.process.run('chmod u+w -R %s' % os.path.join(self.srcdir, '.git', 'annex'))
+
+ uri = 'gitannex://%s;protocol=file;subdir=${S}' % self.srcdir
+ fetcher, ud = self.fetch_shallow(uri)
+
+ self.assertRevCount(1)
+ assert './.git/annex/' in bb.process.run('tar -tzf %s' % os.path.join(self.dldir, ud.mirrortarballs[0]))[0]
+ assert os.path.exists(os.path.join(self.gitdir, 'c'))
+
+ def test_shallow_multi_one_uri(self):
+ # Create initial git repo
+ self.add_empty_file('a')
+ self.add_empty_file('b')
+ self.git('checkout -b a_branch', cwd=self.srcdir)
+ self.add_empty_file('c')
+ self.add_empty_file('d')
+ self.git('checkout master', cwd=self.srcdir)
+ self.git('tag v0.0 a_branch', cwd=self.srcdir)
+ self.add_empty_file('e')
+ self.git('merge --no-ff --no-edit a_branch', cwd=self.srcdir)
+ self.add_empty_file('f')
+ self.assertRevCount(7, cwd=self.srcdir)
+
+ uri = self.d.getVar('SRC_URI', True).split()[0]
+ uri = '%s;branch=master,a_branch;name=master,a_branch' % uri
+
+ self.d.setVar('BB_GIT_SHALLOW_DEPTH', '0')
+ self.d.setVar('BB_GIT_SHALLOW_REVS', 'v0.0')
+ self.d.setVar('SRCREV_master', '${AUTOREV}')
+ self.d.setVar('SRCREV_a_branch', '${AUTOREV}')
+
+ self.fetch_shallow(uri)
+
+ self.assertRevCount(5)
+ self.assertRefs(['master', 'origin/master', 'origin/a_branch'])
+
+ def test_shallow_multi_one_uri_depths(self):
+ # Create initial git repo
+ self.add_empty_file('a')
+ self.add_empty_file('b')
+ self.git('checkout -b a_branch', cwd=self.srcdir)
+ self.add_empty_file('c')
+ self.add_empty_file('d')
+ self.git('checkout master', cwd=self.srcdir)
+ self.add_empty_file('e')
+ self.git('merge --no-ff --no-edit a_branch', cwd=self.srcdir)
+ self.add_empty_file('f')
+ self.assertRevCount(7, cwd=self.srcdir)
+
+ uri = self.d.getVar('SRC_URI', True).split()[0]
+ uri = '%s;branch=master,a_branch;name=master,a_branch' % uri
+
+ self.d.setVar('BB_GIT_SHALLOW_DEPTH', '0')
+ self.d.setVar('BB_GIT_SHALLOW_DEPTH_master', '3')
+ self.d.setVar('BB_GIT_SHALLOW_DEPTH_a_branch', '1')
+ self.d.setVar('SRCREV_master', '${AUTOREV}')
+ self.d.setVar('SRCREV_a_branch', '${AUTOREV}')
+
+ self.fetch_shallow(uri)
+
+ self.assertRevCount(4, ['--all'])
+ self.assertRefs(['master', 'origin/master', 'origin/a_branch'])
+
+ def test_shallow_clone_preferred_over_shallow(self):
+ self.add_empty_file('a')
+ self.add_empty_file('b')
+
+ # Fetch once to generate the shallow tarball
+ fetcher, ud = self.fetch()
+ assert os.path.exists(os.path.join(self.dldir, ud.mirrortarballs[0]))
+
+ # Fetch and unpack with both the clonedir and shallow tarball available
+ bb.utils.remove(self.gitdir, recurse=True)
+ fetcher, ud = self.fetch_and_unpack()
+
+ # The unpacked tree should *not* be shallow
+ self.assertRevCount(2)
+ assert not os.path.exists(os.path.join(self.gitdir, '.git', 'shallow'))
+
+ def test_shallow_mirrors(self):
+ self.add_empty_file('a')
+ self.add_empty_file('b')
+
+ # Fetch once to generate the shallow tarball
+ fetcher, ud = self.fetch()
+ mirrortarball = ud.mirrortarballs[0]
+ assert os.path.exists(os.path.join(self.dldir, mirrortarball))
+
+ # Set up the mirror
+ mirrordir = os.path.join(self.tempdir, 'mirror')
+ bb.utils.mkdirhier(mirrordir)
+ self.d.setVar('PREMIRRORS', 'git://.*/.* file://%s/\n' % mirrordir)
+
+ os.rename(os.path.join(self.dldir, mirrortarball),
+ os.path.join(mirrordir, mirrortarball))
+
+ # Fetch from the mirror
+ bb.utils.remove(self.dldir, recurse=True)
+ bb.utils.remove(self.gitdir, recurse=True)
+ self.fetch_and_unpack()
+ self.assertRevCount(1)
+
+ def test_shallow_invalid_depth(self):
+ self.add_empty_file('a')
+ self.add_empty_file('b')
+
+ self.d.setVar('BB_GIT_SHALLOW_DEPTH', '-12')
+ with self.assertRaises(bb.fetch2.FetchError):
+ self.fetch()
+
+ def test_shallow_invalid_depth_default(self):
+ self.add_empty_file('a')
+ self.add_empty_file('b')
+
+ self.d.setVar('BB_GIT_SHALLOW_DEPTH_default', '-12')
+ with self.assertRaises(bb.fetch2.FetchError):
+ self.fetch()
+
+ def test_shallow_extra_refs(self):
+ self.add_empty_file('a')
+ self.add_empty_file('b')
+ self.git('branch a_branch', cwd=self.srcdir)
+ self.assertRefs(['master', 'a_branch'], cwd=self.srcdir)
+ self.assertRevCount(2, cwd=self.srcdir)
+
+ self.d.setVar('BB_GIT_SHALLOW_EXTRA_REFS', 'refs/heads/a_branch')
+ self.fetch_shallow()
+
+ self.assertRefs(['master', 'origin/master', 'origin/a_branch'])
+ self.assertRevCount(1)
+
+ def test_shallow_extra_refs_wildcard(self):
+ self.add_empty_file('a')
+ self.add_empty_file('b')
+ self.git('branch a_branch', cwd=self.srcdir)
+ self.git('tag v1.0', cwd=self.srcdir)
+ self.assertRefs(['master', 'a_branch', 'v1.0'], cwd=self.srcdir)
+ self.assertRevCount(2, cwd=self.srcdir)
+
+ self.d.setVar('BB_GIT_SHALLOW_EXTRA_REFS', 'refs/tags/*')
+ self.fetch_shallow()
+
+ self.assertRefs(['master', 'origin/master', 'v1.0'])
+ self.assertRevCount(1)
+
+ def test_shallow_missing_extra_refs(self):
+ self.add_empty_file('a')
+ self.add_empty_file('b')
+
+ self.d.setVar('BB_GIT_SHALLOW_EXTRA_REFS', 'refs/heads/foo')
+ with self.assertRaises(bb.fetch2.FetchError):
+ self.fetch()
+
+ def test_shallow_missing_extra_refs_wildcard(self):
+ self.add_empty_file('a')
+ self.add_empty_file('b')
+
+ self.d.setVar('BB_GIT_SHALLOW_EXTRA_REFS', 'refs/tags/*')
+ self.fetch()
+
+ def test_shallow_remove_revs(self):
+ # Create initial git repo
+ self.add_empty_file('a')
+ self.add_empty_file('b')
+ self.git('checkout -b a_branch', cwd=self.srcdir)
+ self.add_empty_file('c')
+ self.add_empty_file('d')
+ self.git('checkout master', cwd=self.srcdir)
+ self.git('tag v0.0 a_branch', cwd=self.srcdir)
+ self.add_empty_file('e')
+ self.git('merge --no-ff --no-edit a_branch', cwd=self.srcdir)
+ self.git('branch -d a_branch', cwd=self.srcdir)
+ self.add_empty_file('f')
+ self.assertRevCount(7, cwd=self.srcdir)
+
+ self.d.setVar('BB_GIT_SHALLOW_DEPTH', '0')
+ self.d.setVar('BB_GIT_SHALLOW_REVS', 'v0.0')
+
+ self.fetch_shallow()
+
+ self.assertRevCount(5)
+
+ def test_shallow_invalid_revs(self):
+ self.add_empty_file('a')
+ self.add_empty_file('b')
+
+ self.d.setVar('BB_GIT_SHALLOW_DEPTH', '0')
+ self.d.setVar('BB_GIT_SHALLOW_REVS', 'v0.0')
+
+ with self.assertRaises(bb.fetch2.FetchError):
+ self.fetch()
+
+ @skipIfNoNetwork()
+ def test_bitbake(self):
+ self.git('remote add --mirror=fetch origin git://github.com/openembedded/bitbake', cwd=self.srcdir)
+ self.git('config core.bare true', cwd=self.srcdir)
+ self.git('fetch', cwd=self.srcdir)
+
+ self.d.setVar('BB_GIT_SHALLOW_DEPTH', '0')
+ # Note that the 1.10.0 tag is annotated, so this also tests
+ # reference of an annotated vs unannotated tag
+ self.d.setVar('BB_GIT_SHALLOW_REVS', '1.10.0')
+
+ self.fetch_shallow()
+
+ # Confirm that the history of 1.10.0 was removed
+ orig_revs = len(self.git('rev-list master', cwd=self.srcdir).splitlines())
+ revs = len(self.git('rev-list master').splitlines())
+ self.assertNotEqual(orig_revs, revs)
+ self.assertRefs(['master', 'origin/master'])
+ self.assertRevCount(orig_revs - 1758)
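
The GitShallowTest cases above drive the shallow-clone support purely through datastore variables (BB_GIT_SHALLOW, BB_GIT_SHALLOW_DEPTH, BB_GENERATE_SHALLOW_TARBALLS). A minimal sketch of the same path outside the test harness follows; the datastore setup, download directory, unpack directory and repository URL are illustrative assumptions, not values taken from this change, and a fully initialised BitBake environment is assumed.

    import bb.data
    import bb.fetch2

    # Hypothetical setup mirroring GitShallowTest; parsed configuration,
    # a writable DL_DIR and network access are assumed.
    d = bb.data.init()
    d.setVar('DL_DIR', '/tmp/downloads')              # where clones and tarballs land
    d.setVar('BB_GIT_SHALLOW', '1')                   # enable shallow cloning
    d.setVar('BB_GIT_SHALLOW_DEPTH', '1')             # keep only the newest commit per branch
    d.setVar('BB_GENERATE_SHALLOW_TARBALLS', '1')     # write a shallow mirror tarball
    d.setVar('SRCREV', '270a05b0b4ba0959fe0624d2a4885d7b70426da5')

    fetcher = bb.fetch2.Fetch(['git://git.openembedded.org/bitbake'], d)
    fetcher.download()                                # shallow clone plus mirror tarball
    fetcher.unpack('/tmp/unpack')                     # unpacked tree appears under /tmp/unpack/git

With BB_GIT_SHALLOW_DEPTH at 1 the unpacked tree carries a .git/shallow file and only the most recent commit, which is exactly what fetch_shallow() asserts in the tests above.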
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/tests/parse.py b/import-layers/yocto-poky/bitbake/lib/bb/tests/parse.py
index ab6ca90..8f16ba4 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/tests/parse.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/tests/parse.py
@@ -83,7 +83,28 @@
self.assertEqual(d.getVar("A"), None)
self.assertEqual(d.getVarFlag("A","flag"), None)
self.assertEqual(d.getVar("B"), "2")
-
+
+ exporttest = """
+A = "a"
+export B = "b"
+export C
+exportD = "d"
+"""
+
+ def test_parse_exports(self):
+ f = self.parsehelper(self.exporttest)
+ d = bb.parse.handle(f.name, self.d)['']
+ self.assertEqual(d.getVar("A"), "a")
+ self.assertIsNone(d.getVarFlag("A", "export"))
+ self.assertEqual(d.getVar("B"), "b")
+ self.assertEqual(d.getVarFlag("B", "export"), 1)
+ self.assertIsNone(d.getVar("C"))
+ self.assertEqual(d.getVarFlag("C", "export"), 1)
+ self.assertIsNone(d.getVar("D"))
+ self.assertIsNone(d.getVarFlag("D", "export"))
+ self.assertEqual(d.getVar("exportD"), "d")
+ self.assertIsNone(d.getVarFlag("exportD", "export"))
+
overridetest = """
RRECOMMENDS_${PN} = "a"
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/tinfoil.py b/import-layers/yocto-poky/bitbake/lib/bb/tinfoil.py
index 928333a..fa95f63 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/tinfoil.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/tinfoil.py
@@ -2,6 +2,7 @@
#
# Copyright (C) 2012-2017 Intel Corporation
# Copyright (C) 2011 Mentor Graphics Corporation
+# Copyright (C) 2006-2012 Richard Purdie
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
@@ -54,6 +55,7 @@
"""Exception raised when run_command fails"""
class TinfoilDataStoreConnector:
+ """Connector object used to enable access to datastore objects via tinfoil"""
def __init__(self, tinfoil, dsindex):
self.tinfoil = tinfoil
@@ -172,6 +174,14 @@
attrvalue = self.tinfoil.run_command('getBbFilePriority') or {}
elif name == 'pkg_dp':
attrvalue = self.tinfoil.run_command('getDefaultPreference') or {}
+ elif name == 'fn_provides':
+ attrvalue = self.tinfoil.run_command('getRecipeProvides') or {}
+ elif name == 'packages':
+ attrvalue = self.tinfoil.run_command('getRecipePackages') or {}
+ elif name == 'packages_dynamic':
+ attrvalue = self.tinfoil.run_command('getRecipePackagesDynamic') or {}
+ elif name == 'rproviders':
+ attrvalue = self.tinfoil.run_command('getRProviders') or {}
else:
raise AttributeError("%s instance has no attribute '%s'" % (self.__class__.__name__, name))
@@ -208,19 +218,119 @@
return self.tinfoil.find_best_provider(pn)
+class TinfoilRecipeInfo:
+ """
+ Provides a convenient representation of the cached information for a single recipe.
+ Some attributes are set on construction, others are read on-demand (which internally
+ may result in a remote procedure call to the bitbake server the first time).
+ Note that only information which is cached is available through this object - if
+ you need other variable values you will need to parse the recipe using
+ Tinfoil.parse_recipe().
+ """
+ def __init__(self, recipecache, d, pn, fn, fns):
+ self._recipecache = recipecache
+ self._d = d
+ self.pn = pn
+ self.fn = fn
+ self.fns = fns
+ self.inherit_files = recipecache.inherits[fn]
+ self.depends = recipecache.deps[fn]
+ (self.pe, self.pv, self.pr) = recipecache.pkg_pepvpr[fn]
+ self._cached_packages = None
+ self._cached_rprovides = None
+ self._cached_packages_dynamic = None
+
+ def __getattr__(self, name):
+ if name == 'alternates':
+ return [x for x in self.fns if x != self.fn]
+ elif name == 'rdepends':
+ return self._recipecache.rundeps[self.fn]
+ elif name == 'rrecommends':
+ return self._recipecache.runrecs[self.fn]
+ elif name == 'provides':
+ return self._recipecache.fn_provides[self.fn]
+ elif name == 'packages':
+ if self._cached_packages is None:
+ self._cached_packages = []
+ for pkg, fns in self._recipecache.packages.items():
+ if self.fn in fns:
+ self._cached_packages.append(pkg)
+ return self._cached_packages
+ elif name == 'packages_dynamic':
+ if self._cached_packages_dynamic is None:
+ self._cached_packages_dynamic = []
+ for pkg, fns in self._recipecache.packages_dynamic.items():
+ if self.fn in fns:
+ self._cached_packages_dynamic.append(pkg)
+ return self._cached_packages_dynamic
+ elif name == 'rprovides':
+ if self._cached_rprovides is None:
+ self._cached_rprovides = []
+ for pkg, fns in self._recipecache.rproviders.items():
+ if self.fn in fns:
+ self._cached_rprovides.append(pkg)
+ return self._cached_rprovides
+ else:
+ raise AttributeError("%s instance has no attribute '%s'" % (self.__class__.__name__, name))
+ def inherits(self, only_recipe=False):
+ """
+ Get the inherited classes for a recipe. Returns the class names only.
+ Parameters:
+ only_recipe: True to return only the classes inherited by the recipe
+ itself, False to return all classes inherited within
+ the context for the recipe (which includes globally
+ inherited classes).
+ """
+ if only_recipe:
+ global_inherit = [x for x in (self._d.getVar('BBINCLUDED') or '').split() if x.endswith('.bbclass')]
+ else:
+ global_inherit = []
+ for clsfile in self.inherit_files:
+ if only_recipe and clsfile in global_inherit:
+ continue
+ clsname = os.path.splitext(os.path.basename(clsfile))[0]
+ yield clsname
+ def __str__(self):
+ return '%s' % self.pn
+
+
class Tinfoil:
+ """
+ Tinfoil - an API for scripts and utilities to query
+ BitBake internals and perform build operations.
+ """
def __init__(self, output=sys.stdout, tracking=False, setup_logging=True):
+ """
+ Create a new tinfoil object.
+ Parameters:
+ output: specifies where console output should be sent. Defaults
+ to sys.stdout.
+ tracking: True to enable variable history tracking, False to
+ disable it (default). Enabling this has a minor
+ performance impact so typically it isn't enabled
+ unless you need to query variable history.
+ setup_logging: True to setup a logger so that things like
+ bb.warn() will work immediately and timeout warnings
+ are visible; False to let BitBake do this itself.
+ """
self.logger = logging.getLogger('BitBake')
self.config_data = None
self.cooker = None
self.tracking = tracking
self.ui_module = None
self.server_connection = None
+ self.recipes_parsed = False
+ self.quiet = 0
+ self.oldhandlers = self.logger.handlers[:]
if setup_logging:
# This is the *client-side* logger, nothing to do with
# logging messages from the server
bb.msg.logger_create('BitBake', output)
+ self.localhandlers = []
+ for handler in self.logger.handlers:
+ if handler not in self.oldhandlers:
+ self.localhandlers.append(handler)
def __enter__(self):
return self
@@ -228,19 +338,61 @@
def __exit__(self, type, value, traceback):
self.shutdown()
- def prepare(self, config_only=False, config_params=None, quiet=0):
+ def prepare(self, config_only=False, config_params=None, quiet=0, extra_features=None):
+ """
+ Prepares the underlying BitBake system to be used via tinfoil.
+ This function must be called prior to calling any of the other
+ functions in the API.
+ NOTE: if you call prepare() you must absolutely call shutdown()
+ before your code terminates. You can use a "with" block to ensure
+ this happens e.g.
+
+ with bb.tinfoil.Tinfoil() as tinfoil:
+ tinfoil.prepare()
+ ...
+
+ Parameters:
+ config_only: True to read only the configuration and not load
+ the cache / parse recipes. This is useful if you just
+ want to query the value of a variable at the global
+ level or you want to do anything else that doesn't
+ involve knowing anything about the recipes in the
+ current configuration. False loads the cache / parses
+ recipes.
+ config_params: optionally specify your own configuration
+ parameters. If not specified an instance of
+ TinfoilConfigParameters will be created internally.
+ quiet: quiet level controlling console output - equivalent
+ to bitbake's -q/--quiet option. Default of 0 gives
+ the same output level as normal bitbake execution.
+ extra_features: extra features to be added to the feature
+ set requested from the server. See
+ CookerFeatures._feature_list for possible
+ features.
+ """
+ self.quiet = quiet
+
if self.tracking:
extrafeatures = [bb.cooker.CookerFeatures.BASEDATASTORE_TRACKING]
else:
extrafeatures = []
+ if extra_features:
+ extrafeatures += extra_features
+
if not config_params:
config_params = TinfoilConfigParameters(config_only=config_only, quiet=quiet)
cookerconfig = CookerConfiguration()
cookerconfig.setConfigParameters(config_params)
- server, self.server_connection, ui_module = setup_bitbake(config_params,
+ if not config_only:
+ # Disable local loggers because the UI module is going to set up its own
+ for handler in self.localhandlers:
+ self.logger.handlers.remove(handler)
+ self.localhandlers = []
+
+ self.server_connection, ui_module = setup_bitbake(config_params,
cookerconfig,
extrafeatures)
@@ -266,6 +418,7 @@
self.run_command('parseConfiguration')
else:
self.run_actions(config_params)
+ self.recipes_parsed = True
self.config_data = bb.data.init()
connector = TinfoilDataStoreConnector(self, None)
@@ -285,7 +438,13 @@
def parseRecipes(self):
"""
- Force a parse of all recipes. Normally you should specify
+ Legacy function - use parse_recipes() instead.
+ """
+ self.parse_recipes()
+
+ def parse_recipes(self):
+ """
+ Load information on all recipes. Normally you should specify
config_only=False when calling prepare() instead of using this
function; this function is designed for situations where you need
to initialise Tinfoil and use it with config_only=True first and
@@ -293,6 +452,7 @@
"""
config_params = TinfoilConfigParameters(config_only=False)
self.run_actions(config_params)
+ self.recipes_parsed = True
def run_command(self, command, *params):
"""
@@ -339,9 +499,16 @@
return self.server_connection.events.waitEvent(timeout)
def get_overlayed_recipes(self):
+ """
+ Find recipes which are overlayed (i.e. where recipes exist in multiple layers)
+ """
return defaultdict(list, self.run_command('getOverlayedRecipes'))
def get_skipped_recipes(self):
+ """
+ Find recipes which were skipped (i.e. SkipRecipe was raised
+ during parsing).
+ """
return OrderedDict(self.run_command('getSkippedRecipes'))
def get_all_providers(self):
@@ -374,8 +541,77 @@
return best[3]
def get_file_appends(self, fn):
+ """
+ Find the bbappends for a recipe file
+ """
return self.run_command('getFileAppends', fn)
+ def all_recipes(self, mc='', sort=True):
+ """
+ Enable iterating over all recipes in the current configuration.
+ Returns an iterator over TinfoilRecipeInfo objects created on demand.
+ Parameters:
+ mc: The multiconfig, default of '' uses the main configuration.
+ sort: True to sort recipes alphabetically (default), False otherwise
+ """
+ recipecache = self.cooker.recipecaches[mc]
+ if sort:
+ recipes = sorted(recipecache.pkg_pn.items())
+ else:
+ recipes = recipecache.pkg_pn.items()
+ for pn, fns in recipes:
+ prov = self.find_best_provider(pn)
+ recipe = TinfoilRecipeInfo(recipecache,
+ self.config_data,
+ pn=pn,
+ fn=prov[3],
+ fns=fns)
+ yield recipe
+
+ def all_recipe_files(self, mc='', variants=True, preferred_only=False):
+ """
+ Enable iterating over all recipe files in the current configuration.
+ Returns an iterator over file paths.
+ Parameters:
+ mc: The multiconfig, default of '' uses the main configuration.
+ variants: True to include variants of recipes created through
+ BBCLASSEXTEND (default) or False to exclude them
+ preferred_only: True to include only the preferred recipe where
+ multiple exist providing the same PN, False to list
+ all recipes
+ """
+ recipecache = self.cooker.recipecaches[mc]
+ if preferred_only:
+ files = []
+ for pn in recipecache.pkg_pn.keys():
+ prov = self.find_best_provider(pn)
+ files.append(prov[3])
+ else:
+ files = recipecache.pkg_fn.keys()
+ for fn in sorted(files):
+ if not variants and fn.startswith('virtual:'):
+ continue
+ yield fn
+
+
+ def get_recipe_info(self, pn, mc=''):
+ """
+ Get information on a specific recipe in the current configuration by name (PN).
+ Returns a TinfoilRecipeInfo object created on demand.
+ Parameters:
+ mc: The multiconfig, default of '' uses the main configuration.
+ """
+ recipecache = self.cooker.recipecaches[mc]
+ prov = self.find_best_provider(pn)
+ fn = prov[3]
+ actual_pn = recipecache.pkg_fn[fn]
+ recipe = TinfoilRecipeInfo(recipecache,
+ self.config_data,
+ pn=actual_pn,
+ fn=fn,
+ fns=recipecache.pkg_pn[actual_pn])
+ return recipe
+
def parse_recipe(self, pn):
"""
Parse the specified recipe and return a datastore object
@@ -399,26 +635,199 @@
specify config_data then you cannot use a virtual
specification for fn.
"""
- if appends and appendlist == []:
- appends = False
- if config_data:
- dctr = bb.remotedata.RemoteDatastores.transmit_datastore(config_data)
- dscon = self.run_command('parseRecipeFile', fn, appends, appendlist, dctr)
- else:
- dscon = self.run_command('parseRecipeFile', fn, appends, appendlist)
- if dscon:
- return self._reconvert_type(dscon, 'DataStoreConnectionHandle')
- else:
- return None
+ if self.tracking:
+ # Enable history tracking just for the parse operation
+ self.run_command('enableDataTracking')
+ try:
+ if appends and appendlist == []:
+ appends = False
+ if config_data:
+ dctr = bb.remotedata.RemoteDatastores.transmit_datastore(config_data)
+ dscon = self.run_command('parseRecipeFile', fn, appends, appendlist, dctr)
+ else:
+ dscon = self.run_command('parseRecipeFile', fn, appends, appendlist)
+ if dscon:
+ return self._reconvert_type(dscon, 'DataStoreConnectionHandle')
+ else:
+ return None
+ finally:
+ if self.tracking:
+ self.run_command('disableDataTracking')
- def build_file(self, buildfile, task):
+ def build_file(self, buildfile, task, internal=True):
"""
Runs the specified task for just a single recipe (i.e. no dependencies).
- This is equivalent to bitbake -b, except no warning will be printed.
+ This is equivalent to bitbake -b, except with the default internal=True
+ no warning about dependencies will be produced, normal info messages
+ from the runqueue will be silenced and BuildInit, BuildStarted and
+ BuildCompleted events will not be fired.
"""
- return self.run_command('buildFile', buildfile, task, True)
+ return self.run_command('buildFile', buildfile, task, internal)
+
+ def build_targets(self, targets, task=None, handle_events=True, extra_events=None, event_callback=None):
+ """
+ Builds the specified targets. This is equivalent to a normal invocation
+ of bitbake. Has built-in event handling which is enabled by default and
+ can be extended if needed.
+ Parameters:
+ targets:
+ One or more targets to build. Can be a list or a
+ space-separated string.
+ task:
+ The task to run; if None then the value of BB_DEFAULT_TASK
+ will be used. Default None.
+ handle_events:
+ True to handle events in a similar way to normal bitbake
+ invocation with knotty; False to return immediately (on the
+ assumption that the caller will handle the events instead).
+ Default True.
+ extra_events:
+ An optional list of events to add to the event mask (if
+ handle_events=True). If you add events here you also need
+ to specify a callback function in event_callback that will
+ handle the additional events. Default None.
+ event_callback:
+ An optional function taking a single parameter which
+ will be called first upon receiving any event (if
+ handle_events=True) so that the caller can override or
+ extend the event handling. Default None.
+ """
+ if isinstance(targets, str):
+ targets = targets.split()
+ if not task:
+ task = self.config_data.getVar('BB_DEFAULT_TASK')
+
+ if handle_events:
+ # A reasonable set of default events matching up with those we handle below
+ eventmask = [
+ 'bb.event.BuildStarted',
+ 'bb.event.BuildCompleted',
+ 'logging.LogRecord',
+ 'bb.event.NoProvider',
+ 'bb.command.CommandCompleted',
+ 'bb.command.CommandFailed',
+ 'bb.build.TaskStarted',
+ 'bb.build.TaskFailed',
+ 'bb.build.TaskSucceeded',
+ 'bb.build.TaskFailedSilent',
+ 'bb.build.TaskProgress',
+ 'bb.runqueue.runQueueTaskStarted',
+ 'bb.runqueue.sceneQueueTaskStarted',
+ 'bb.event.ProcessStarted',
+ 'bb.event.ProcessProgress',
+ 'bb.event.ProcessFinished',
+ ]
+ if extra_events:
+ eventmask.extend(extra_events)
+ ret = self.set_event_mask(eventmask)
+
+ includelogs = self.config_data.getVar('BBINCLUDELOGS')
+ loglines = self.config_data.getVar('BBINCLUDELOGS_LINES')
+
+ ret = self.run_command('buildTargets', targets, task)
+ if handle_events:
+ result = False
+ # Borrowed from knotty; somewhat hackily, we use the helper
+ # as the object to store "shutdown" on
+ helper = bb.ui.uihelper.BBUIHelper()
+ # We set up logging optionally in the constructor so now we need to
+ # grab the handlers to pass to TerminalFilter
+ console = None
+ errconsole = None
+ for handler in self.logger.handlers:
+ if isinstance(handler, logging.StreamHandler):
+ if handler.stream == sys.stdout:
+ console = handler
+ elif handler.stream == sys.stderr:
+ errconsole = handler
+ format_str = "%(levelname)s: %(message)s"
+ format = bb.msg.BBLogFormatter(format_str)
+ helper.shutdown = 0
+ parseprogress = None
+ termfilter = bb.ui.knotty.TerminalFilter(helper, helper, console, errconsole, format, quiet=self.quiet)
+ try:
+ while True:
+ try:
+ event = self.wait_event(0.25)
+ if event:
+ if event_callback and event_callback(event):
+ continue
+ if helper.eventHandler(event):
+ if isinstance(event, bb.build.TaskFailedSilent):
+ logger.warning("Logfile for failed setscene task is %s" % event.logfile)
+ elif isinstance(event, bb.build.TaskFailed):
+ bb.ui.knotty.print_event_log(event, includelogs, loglines, termfilter)
+ continue
+ if isinstance(event, bb.event.ProcessStarted):
+ if self.quiet > 1:
+ continue
+ parseprogress = bb.ui.knotty.new_progress(event.processname, event.total)
+ parseprogress.start(False)
+ continue
+ if isinstance(event, bb.event.ProcessProgress):
+ if self.quiet > 1:
+ continue
+ if parseprogress:
+ parseprogress.update(event.progress)
+ else:
+ bb.warn("Got ProcessProgress event for someting that never started?")
+ continue
+ if isinstance(event, bb.event.ProcessFinished):
+ if self.quiet > 1:
+ continue
+ if parseprogress:
+ parseprogress.finish()
+ parseprogress = None
+ continue
+ if isinstance(event, bb.command.CommandCompleted):
+ result = True
+ break
+ if isinstance(event, bb.command.CommandFailed):
+ self.logger.error(str(event))
+ result = False
+ break
+ if isinstance(event, logging.LogRecord):
+ if event.taskpid == 0 or event.levelno > logging.INFO:
+ self.logger.handle(event)
+ continue
+ if isinstance(event, bb.event.NoProvider):
+ self.logger.error(str(event))
+ result = False
+ break
+
+ elif helper.shutdown > 1:
+ break
+ termfilter.updateFooter()
+ except KeyboardInterrupt:
+ termfilter.clearFooter()
+ if helper.shutdown == 1:
+ print("\nSecond Keyboard Interrupt, stopping...\n")
+ ret = self.run_command("stateForceShutdown")
+ if ret and ret[2]:
+ self.logger.error("Unable to cleanly stop: %s" % ret[2])
+ elif helper.shutdown == 0:
+ print("\nKeyboard Interrupt, closing down...\n")
+ interrupted = True
+ ret = self.run_command("stateShutdown")
+ if ret and ret[2]:
+ self.logger.error("Unable to cleanly shutdown: %s" % ret[2])
+ helper.shutdown = helper.shutdown + 1
+ termfilter.clearFooter()
+ finally:
+ termfilter.finish()
+ if helper.failed_tasks:
+ result = False
+ return result
+ else:
+ return ret
def shutdown(self):
+ """
+ Shut down tinfoil. Disconnects from the server and gracefully
+ releases any associated resources. You must call this function if
+ prepare() has been called, or use a with... block when you create
+ the tinfoil object which will ensure that it gets called.
+ """
if self.server_connection:
self.run_command('clientComplete')
_server_connections.remove(self.server_connection)
@@ -426,6 +835,12 @@
self.server_connection.terminate()
self.server_connection = None
+ # Restore the logging handlers to how they looked when we started
+ if self.oldhandlers:
+ for handler in self.logger.handlers:
+ if handler not in self.oldhandlers:
+ self.logger.handlers.remove(handler)
+
def _reconvert_type(self, obj, origtypename):
"""
Convert an object back to the right type, in the case
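
The docstrings added to Tinfoil above describe a script-facing API (prepare(), all_recipes(), get_recipe_info(), build_targets()). A short usage sketch, assuming an initialised build environment, is shown below; the recipe name 'busybox' and the target/task pair are illustrative assumptions rather than values from this change.

    import bb.tinfoil

    with bb.tinfoil.Tinfoil() as tinfoil:
        tinfoil.prepare(config_only=False)        # parse recipes as well as the configuration

        # Iterate over cached recipe information without re-parsing each recipe
        for recipe in tinfoil.all_recipes():
            print('%s %s' % (recipe.pn, recipe.pv))

        # Look up a single recipe by PN and inspect its cached metadata
        info = tinfoil.get_recipe_info('busybox')
        if info:
            print(info.fn, list(info.inherits()))

        # Build one or more targets with knotty-style event handling
        tinfoil.build_targets('quilt-native', task='fetch')

Because all_recipes() and get_recipe_info() return TinfoilRecipeInfo objects built from the cache, attributes such as packages and rprovides may trigger a remote command to the BitBake server the first time they are read, as the class docstring above notes.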
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/ui/buildinfohelper.py b/import-layers/yocto-poky/bitbake/lib/bb/ui/buildinfohelper.py
index e451c63..524a5b0 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/ui/buildinfohelper.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/ui/buildinfohelper.py
@@ -719,7 +719,11 @@
def save_build_package_information(self, build_obj, package_info, recipes,
built_package):
- # assert isinstance(build_obj, Build)
+ # assert isinstance(build_obj, Build)
+
+ if not 'PN' in package_info.keys():
+ # no package data to save (e.g. 'OPKGN'="lib64-*"|"lib32-*")
+ return None
# create and save the object
pname = package_info['PKG']
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/ui/knotty.py b/import-layers/yocto-poky/bitbake/lib/bb/ui/knotty.py
index 82aa7c4..fa88e6c 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/ui/knotty.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/ui/knotty.py
@@ -207,8 +207,10 @@
self.interactive = False
bb.note("Unable to use interactive mode for this terminal, using fallback")
return
- console.addFilter(InteractConsoleLogFilter(self, format))
- errconsole.addFilter(InteractConsoleLogFilter(self, format))
+ if console:
+ console.addFilter(InteractConsoleLogFilter(self, format))
+ if errconsole:
+ errconsole.addFilter(InteractConsoleLogFilter(self, format))
self.main_progress = None
@@ -310,6 +312,32 @@
fd = sys.stdin.fileno()
self.termios.tcsetattr(fd, self.termios.TCSADRAIN, self.stdinbackup)
+def print_event_log(event, includelogs, loglines, termfilter):
+ # FIXME refactor this out further
+ logfile = event.logfile
+ if logfile and os.path.exists(logfile):
+ termfilter.clearFooter()
+ bb.error("Logfile of failure stored in: %s" % logfile)
+ if includelogs and not event.errprinted:
+ print("Log data follows:")
+ f = open(logfile, "r")
+ lines = []
+ while True:
+ l = f.readline()
+ if l == '':
+ break
+ l = l.rstrip()
+ if loglines:
+ lines.append(' | %s' % l)
+ if len(lines) > int(loglines):
+ lines.pop(0)
+ else:
+ print('| %s' % l)
+ f.close()
+ if lines:
+ for line in lines:
+ print(line)
+
def _log_settings_from_server(server, observe_only):
# Get values of variables which control our output
includelogs, error = server.runCommand(["getVariable", "BBINCLUDELOGS"])
@@ -342,6 +370,9 @@
def main(server, eventHandler, params, tf = TerminalFilter):
+ if not params.observe_only:
+ params.updateToServer(server, os.environ.copy())
+
includelogs, loglines, consolelogfile = _log_settings_from_server(server, params.observe_only)
if sys.stdin.isatty() and sys.stdout.isatty():
@@ -365,8 +396,9 @@
bb.msg.addDefaultlogFilter(errconsole, bb.msg.BBLogFilterStdErr)
console.setFormatter(format)
errconsole.setFormatter(format)
- logger.addHandler(console)
- logger.addHandler(errconsole)
+ if not bb.msg.has_console_handler(logger):
+ logger.addHandler(console)
+ logger.addHandler(errconsole)
bb.utils.set_process_name("KnottyUI")
@@ -395,7 +427,6 @@
universe = False
if not params.observe_only:
params.updateFromServer(server)
- params.updateToServer(server, os.environ.copy())
cmdline = params.parseActions()
if not cmdline:
print("Nothing to do. Use 'bitbake world' to build everything, or run 'bitbake --help' for usage information.")
@@ -471,11 +502,11 @@
continue
# Prefix task messages with recipe/task
- if event.taskpid in helper.running_tasks:
+ if event.taskpid in helper.running_tasks and event.levelno != format.PLAIN:
taskinfo = helper.running_tasks[event.taskpid]
event.msg = taskinfo['title'] + ': ' + event.msg
if hasattr(event, 'fn'):
- event.msg = event.fn + ': ' + event.msg
+ event.msg = event.fn + ': ' + event.msg
logger.handle(event)
continue
@@ -484,29 +515,7 @@
continue
if isinstance(event, bb.build.TaskFailed):
return_value = 1
- logfile = event.logfile
- if logfile and os.path.exists(logfile):
- termfilter.clearFooter()
- bb.error("Logfile of failure stored in: %s" % logfile)
- if includelogs and not event.errprinted:
- print("Log data follows:")
- f = open(logfile, "r")
- lines = []
- while True:
- l = f.readline()
- if l == '':
- break
- l = l.rstrip()
- if loglines:
- lines.append(' | %s' % l)
- if len(lines) > int(loglines):
- lines.pop(0)
- else:
- print('| %s' % l)
- f.close()
- if lines:
- for line in lines:
- print(line)
+ print_event_log(event, includelogs, loglines, termfilter)
if isinstance(event, bb.build.TaskBase):
logger.info(event._message)
continue
@@ -559,7 +568,7 @@
return_value = event.exitcode
if event.error:
errors = errors + 1
- logger.error("Command execution failed: %s", event.error)
+ logger.error(str(event))
main.shutdown = 2
continue
if isinstance(event, bb.command.CommandExit):
@@ -570,39 +579,16 @@
main.shutdown = 2
continue
if isinstance(event, bb.event.MultipleProviders):
- logger.info("multiple providers are available for %s%s (%s)", event._is_runtime and "runtime " or "",
- event._item,
- ", ".join(event._candidates))
- rtime = ""
- if event._is_runtime:
- rtime = "R"
- logger.info("consider defining a PREFERRED_%sPROVIDER entry to match %s" % (rtime, event._item))
+ logger.info(str(event))
continue
if isinstance(event, bb.event.NoProvider):
- if event._runtime:
- r = "R"
- else:
- r = ""
-
- extra = ''
- if not event._reasons:
- if event._close_matches:
- extra = ". Close matches:\n %s" % '\n '.join(event._close_matches)
-
# For universe builds, only show these as warnings, not errors
- h = logger.warning
if not universe:
return_value = 1
errors = errors + 1
- h = logger.error
-
- if event._dependees:
- h("Nothing %sPROVIDES '%s' (but %s %sDEPENDS on or otherwise requires it)%s", r, event._item, ", ".join(event._dependees), r, extra)
+ logger.error(str(event))
else:
- h("Nothing %sPROVIDES '%s'%s", r, event._item, extra)
- if event._reasons:
- for reason in event._reasons:
- h("%s", reason)
+ logger.warning(str(event))
continue
if isinstance(event, bb.runqueue.sceneQueueTaskStarted):
@@ -624,13 +610,11 @@
if isinstance(event, bb.runqueue.runQueueTaskFailed):
return_value = 1
taskfailures.append(event.taskstring)
- logger.error("Task (%s) failed with exit code '%s'",
- event.taskstring, event.exitcode)
+ logger.error(str(event))
continue
if isinstance(event, bb.runqueue.sceneQueueTaskFailed):
- logger.warning("Setscene task (%s) failed with exit code '%s' - real task will be run instead",
- event.taskstring, event.exitcode)
+ logger.warning(str(event))
continue
if isinstance(event, bb.event.DepTreeGenerated):
@@ -663,6 +647,7 @@
bb.event.MetadataEvent,
bb.event.StampUpdate,
bb.event.ConfigParsed,
+ bb.event.MultiConfigParsed,
bb.event.RecipeParsed,
bb.event.RecipePreFinalise,
bb.runqueue.runQueueEvent,
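
The TaskFailed branch above now delegates logfile tailing to a print_event_log() helper. A hedged sketch of the behaviour being factored out, reconstructed from the removed lines (the real helper in knotty.py may differ in detail, e.g. it reports the logfile path through bb.error):

    import os

    def print_event_log(event, includelogs, loglines, termfilter):
        # Reconstruction of the removed inline code; illustrative only.
        logfile = event.logfile
        if logfile and os.path.exists(logfile):
            termfilter.clearFooter()
            print("Logfile of failure stored in: %s" % logfile)
            if includelogs and not event.errprinted:
                print("Log data follows:")
                lines = []
                with open(logfile, "r") as f:
                    for l in f:
                        l = l.rstrip()
                        if loglines:
                            lines.append(' | %s' % l)
                            if len(lines) > int(loglines):
                                lines.pop(0)  # keep only the last 'loglines' lines
                        else:
                            print('| %s' % l)
                for line in lines:
                    print(line)
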
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/ui/ncurses.py b/import-layers/yocto-poky/bitbake/lib/bb/ui/ncurses.py
index ca845a3..8690c52 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/ui/ncurses.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/ui/ncurses.py
@@ -315,7 +315,7 @@
# also allow them to now exit with a single ^C
shutdown = 2
if isinstance(event, bb.command.CommandFailed):
- mw.appendText("Command execution failed: %s" % event.error)
+ mw.appendText(str(event))
time.sleep(2)
exitflag = True
if isinstance(event, bb.command.CommandExit):
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/ui/taskexp.py b/import-layers/yocto-poky/bitbake/lib/bb/ui/taskexp.py
index 9d14ece..0e8e9d4 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/ui/taskexp.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/ui/taskexp.py
@@ -63,7 +63,9 @@
self.current = None
self.filter_model = model.filter_new()
self.filter_model.set_visible_func(self._filter)
- self.set_model(self.filter_model)
+ self.sort_model = self.filter_model.sort_new_with_model()
+ self.sort_model.set_sort_column_id(COL_DEP_PARENT, Gtk.SortType.ASCENDING)
+ self.set_model(self.sort_model)
self.append_column(Gtk.TreeViewColumn(label, Gtk.CellRendererText(), text=COL_DEP_PARENT))
def _filter(self, model, iter, data):
@@ -286,23 +288,7 @@
continue
if isinstance(event, bb.event.NoProvider):
- if event._runtime:
- r = "R"
- else:
- r = ""
-
- extra = ''
- if not event._reasons:
- if event._close_matches:
- extra = ". Close matches:\n %s" % '\n '.join(event._close_matches)
-
- if event._dependees:
- print("Nothing %sPROVIDES '%s' (but %s %sDEPENDS on or otherwise requires it)%s" % (r, event._item, ", ".join(event._dependees), r, extra))
- else:
- print("Nothing %sPROVIDES '%s'%s" % (r, event._item, extra))
- if event._reasons:
- for reason in event._reasons:
- print(reason)
+ print(str(event))
_, error = server.runCommand(["stateShutdown"])
if error:
@@ -310,7 +296,7 @@
break
if isinstance(event, bb.command.CommandFailed):
- print("Command execution failed: %s" % event.error)
+ print(str(event))
return event.exitcode
if isinstance(event, bb.command.CommandExit):
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/ui/toasterui.py b/import-layers/yocto-poky/bitbake/lib/bb/ui/toasterui.py
index 71f04fa..88cec37 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/ui/toasterui.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/ui/toasterui.py
@@ -320,29 +320,13 @@
if isinstance(event, bb.event.CacheLoadCompleted):
continue
if isinstance(event, bb.event.MultipleProviders):
- logger.info("multiple providers are available for %s%s (%s)", event._is_runtime and "runtime " or "",
- event._item,
- ", ".join(event._candidates))
- logger.info("consider defining a PREFERRED_PROVIDER entry to match %s", event._item)
+ logger.info(str(event))
continue
if isinstance(event, bb.event.NoProvider):
errors = errors + 1
- if event._runtime:
- r = "R"
- else:
- r = ""
-
- if event._dependees:
- text = "Nothing %sPROVIDES '%s' (but %s %sDEPENDS on or otherwise requires it)" % (r, event._item, ", ".join(event._dependees), r)
- else:
- text = "Nothing %sPROVIDES '%s'" % (r, event._item)
-
+ text = str(event)
logger.error(text)
- if event._reasons:
- for reason in event._reasons:
- logger.error("%s", reason)
- text += reason
buildinfohelper.store_log_error(text)
continue
@@ -364,8 +348,7 @@
if isinstance(event, bb.runqueue.runQueueTaskFailed):
buildinfohelper.update_and_store_task(event)
taskfailures.append(event.taskstring)
- logger.error("Task (%s) failed with exit code '%s'",
- event.taskstring, event.exitcode)
+ logger.error(str(event))
continue
if isinstance(event, (bb.runqueue.sceneQueueTaskCompleted, bb.runqueue.sceneQueueTaskFailed)):
@@ -382,7 +365,7 @@
if isinstance(event, bb.command.CommandFailed):
errors += 1
errorcode = 1
- logger.error("Command execution failed: %s", event.error)
+ logger.error(str(event))
elif isinstance(event, bb.event.BuildCompleted):
buildinfohelper.scan_image_artifacts()
buildinfohelper.clone_required_sdk_artifacts()
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/ui/uihelper.py b/import-layers/yocto-poky/bitbake/lib/bb/ui/uihelper.py
index 113fced..963c1ea 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/ui/uihelper.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/ui/uihelper.py
@@ -61,6 +61,9 @@
self.running_tasks[event.pid]['progress'] = event.progress
self.running_tasks[event.pid]['rate'] = event.rate
self.needUpdate = True
+ else:
+ return False
+ return True
def getTasks(self):
self.needUpdate = False
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/utils.py b/import-layers/yocto-poky/bitbake/lib/bb/utils.py
index 6a44db5..c540b49 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/utils.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/utils.py
@@ -771,13 +771,14 @@
return None
renamefailed = 1
+ # os.rename needs to know the dest path ending with file name
+ # so append the file name to a path only if it's a dir specified
+ srcfname = os.path.basename(src)
+ destpath = os.path.join(dest, srcfname) if os.path.isdir(dest) \
+ else dest
+
if sstat[stat.ST_DEV] == dstat[stat.ST_DEV]:
try:
- # os.rename needs to know the dest path ending with file name
- # so append the file name to a path only if it's a dir specified
- srcfname = os.path.basename(src)
- destpath = os.path.join(dest, srcfname) if os.path.isdir(dest) \
- else dest
os.rename(src, destpath)
renamefailed = 0
except Exception as e:
@@ -791,8 +792,8 @@
didcopy = 0
if stat.S_ISREG(sstat[stat.ST_MODE]):
try: # For safety copy then move it over.
- shutil.copyfile(src, dest + "#new")
- os.rename(dest + "#new", dest)
+ shutil.copyfile(src, destpath + "#new")
+ os.rename(destpath + "#new", destpath)
didcopy = 1
except Exception as e:
print('movefile: copy', src, '->', dest, 'failed.', e)
@@ -813,9 +814,9 @@
return None
if newmtime:
- os.utime(dest, (newmtime, newmtime))
+ os.utime(destpath, (newmtime, newmtime))
else:
- os.utime(dest, (sstat[stat.ST_ATIME], sstat[stat.ST_MTIME]))
+ os.utime(destpath, (sstat[stat.ST_ATIME], sstat[stat.ST_MTIME]))
newmtime = sstat[stat.ST_MTIME]
return newmtime
@@ -1502,7 +1503,7 @@
def load_plugins(logger, plugins, pluginpath):
def load_plugin(name):
- logger.debug('Loading plugin %s' % name)
+ logger.debug(1, 'Loading plugin %s' % name)
fp, pathname, description = imp.find_module(name, [pluginpath])
try:
return imp.load_module(name, fp, pathname, description)
@@ -1510,7 +1511,7 @@
if fp:
fp.close()
- logger.debug('Loading plugins from %s...' % pluginpath)
+ logger.debug(1, 'Loading plugins from %s...' % pluginpath)
expanded = (glob.glob(os.path.join(pluginpath, '*' + ext))
for ext in python_extensions)
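
The movefile() change above hoists the destination-path computation so that the rename, the copy fallback and the utime calls all operate on the same resolved path. A minimal sketch of that resolution, with resolve_destpath() as a hypothetical helper name:

    import os

    def resolve_destpath(src, dest):
        # os.rename() needs a full file path; append the source file name
        # only when 'dest' is an existing directory.
        srcfname = os.path.basename(src)
        return os.path.join(dest, srcfname) if os.path.isdir(dest) else dest

    # resolve_destpath("/tmp/a.txt", "/var/cache")   -> "/var/cache/a.txt" (dest is a dir)
    # resolve_destpath("/tmp/a.txt", "/var/cache/b") -> "/var/cache/b"     (dest is a file path)
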
diff --git a/import-layers/yocto-poky/bitbake/lib/bblayers/action.py b/import-layers/yocto-poky/bitbake/lib/bblayers/action.py
index cf94704..b1326e5 100644
--- a/import-layers/yocto-poky/bitbake/lib/bblayers/action.py
+++ b/import-layers/yocto-poky/bitbake/lib/bblayers/action.py
@@ -1,7 +1,9 @@
import fnmatch
import logging
import os
+import shutil
import sys
+import tempfile
import bb.utils
@@ -32,10 +34,26 @@
sys.stderr.write("Unable to find bblayers.conf\n")
return 1
- notadded, _ = bb.utils.edit_bblayers_conf(bblayers_conf, layerdir, None)
- if notadded:
- for item in notadded:
- sys.stderr.write("Specified layer %s is already in BBLAYERS\n" % item)
+ # Back up bblayers.conf to tempdir before we add layers
+ tempdir = tempfile.mkdtemp()
+ backup = tempdir + "/bblayers.conf.bak"
+ shutil.copy2(bblayers_conf, backup)
+
+ try:
+ notadded, _ = bb.utils.edit_bblayers_conf(bblayers_conf, layerdir, None)
+ if not (args.force or notadded):
+ try:
+ self.tinfoil.parseRecipes()
+ except bb.tinfoil.TinfoilUIException:
+ # Restore the back up copy of bblayers.conf
+ shutil.copy2(backup, bblayers_conf)
+ bb.fatal("Parse failure with the specified layer added")
+ else:
+ for item in notadded:
+ sys.stderr.write("Specified layer %s is already in BBLAYERS\n" % item)
+ finally:
+ # Remove the back up copy of bblayers.conf
+ shutil.rmtree(tempdir)
def do_remove_layer(self, args):
"""Remove a layer from bblayers.conf."""
diff --git a/import-layers/yocto-poky/bitbake/lib/bblayers/layerindex.py b/import-layers/yocto-poky/bitbake/lib/bblayers/layerindex.py
index 506c110..9af385d 100644
--- a/import-layers/yocto-poky/bitbake/lib/bblayers/layerindex.py
+++ b/import-layers/yocto-poky/bitbake/lib/bblayers/layerindex.py
@@ -247,6 +247,7 @@
logger.plain("Adding layer \"%s\" to conf/bblayers.conf" % name)
localargs = argparse.Namespace()
localargs.layerdir = layerdir
+ localargs.force = args.force
self.do_add_layer(localargs)
else:
break
diff --git a/import-layers/yocto-poky/bitbake/lib/prserv/serv.py b/import-layers/yocto-poky/bitbake/lib/prserv/serv.py
index a7efa58..6a99728 100644
--- a/import-layers/yocto-poky/bitbake/lib/prserv/serv.py
+++ b/import-layers/yocto-poky/bitbake/lib/prserv/serv.py
@@ -6,10 +6,11 @@
import socket
import io
import sqlite3
-import bb.server.xmlrpc
+import bb.server.xmlrpcclient
import prserv
import prserv.db
import errno
+import select
logger = logging.getLogger("BitBake.PRserv")
@@ -59,6 +60,8 @@
self.register_function(self.importone, "importone")
self.register_introspection_functions()
+ self.quitpipein, self.quitpipeout = os.pipe()
+
self.requestqueue = queue.Queue()
self.handlerthread = threading.Thread(target = self.process_request_thread)
self.handlerthread.daemon = False
@@ -75,12 +78,14 @@
bb.utils.set_process_name("PRServ Handler")
- while not self.quit:
+ while not self.quitflag:
try:
(request, client_address) = self.requestqueue.get(True, 30)
except queue.Empty:
self.table.sync_if_dirty()
continue
+ if request is None:
+ continue
try:
self.finish_request(request, client_address)
self.shutdown_request(request)
@@ -100,7 +105,8 @@
def sigterm_handler(self, signum, stack):
if self.table:
self.table.sync()
- self.quit=True
+ self.quit()
+ self.requestqueue.put((None, None))
def process_request(self, request, client_address):
self.requestqueue.put((request, client_address))
@@ -136,7 +142,7 @@
return self.table.importone(version, pkgarch, checksum, value)
def ping(self):
- return not self.quit
+ return not self.quitflag
def getinfo(self):
return (self.host, self.port)
@@ -152,12 +158,17 @@
return None
def quit(self):
- self.quit=True
+ self.quitflag=True
+ os.write(self.quitpipeout, b"q")
+ os.close(self.quitpipeout)
return
def work_forever(self,):
- self.quit = False
- self.timeout = 0.5
+ self.quitflag = False
+ # This timeout applies to the poll in TCPServer, we need the select
+ # below to wake on our quit pipe closing. We only ever call into handle_request
+ # if there is data there.
+ self.timeout = 0.01
bb.utils.set_process_name("PRServ")
@@ -169,12 +180,17 @@
(self.dbfile, self.host, self.port, str(os.getpid())))
self.handlerthread.start()
- while not self.quit:
- self.handle_request()
+ while not self.quitflag:
+ ready = select.select([self.fileno(), self.quitpipein], [], [], 30)
+ if self.quitflag:
+ break
+ if self.fileno() in ready[0]:
+ self.handle_request()
self.handlerthread.join()
self.db.disconnect()
logger.info("PRServer: stopping...")
self.server_close()
+ os.close(self.quitpipein)
return
def start(self):
@@ -182,6 +198,7 @@
pid = self.daemonize()
else:
pid = self.fork()
+ self.pid = pid
# Ensure both the parent sees this and the child from the work_forever log entry above
logger.info("Started PRServer with DBfile: %s, IP: %s, PORT: %s, PID: %s" %
@@ -300,7 +317,7 @@
host, port = singleton.getinfo()
self.host = host
self.port = port
- self.connection, self.transport = bb.server.xmlrpc._create_server(self.host, self.port)
+ self.connection, self.transport = bb.server.xmlrpcclient._create_server(self.host, self.port)
def terminate(self):
try:
@@ -428,6 +445,9 @@
def auto_start(d):
global singleton
+ # Shutdown any existing PR Server
+ auto_shutdown()
+
host_params = list(filter(None, (d.getVar('PRSERV_HOST') or '').split(':')))
if not host_params:
return None
@@ -464,7 +484,7 @@
logger.critical("PRservice %s:%d not available" % (host, port))
raise PRServiceConfigError
-def auto_shutdown(d=None):
+def auto_shutdown():
global singleton
if singleton:
host, port = singleton.getinfo()
@@ -472,6 +492,11 @@
PRServerConnection(host, port).terminate()
except:
logger.critical("Stop PRService %s:%d failed" % (host,port))
+
+ try:
+ os.waitpid(singleton.prserv.pid, 0)
+ except ChildProcessError:
+ pass
singleton = None
def ping(host, port):
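
The PRServer changes above replace the busy "while not self.quit: handle_request()" loop with a select() that also watches a quit pipe, so quit() can wake the loop immediately instead of waiting out a timeout. An illustrative, self-contained sketch of that self-pipe pattern (not the PRServer code itself):

    import os
    import select
    import socket
    import threading

    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)

    quit_r, quit_w = os.pipe()

    def request_quit():
        os.write(quit_w, b"q")   # wakes the select() below immediately
        os.close(quit_w)

    # PRServer triggers this from a signal handler; a timer stands in here.
    threading.Timer(0.2, request_quit).start()

    quitting = False
    while not quitting:
        ready, _, _ = select.select([srv.fileno(), quit_r], [], [], 30)
        if quit_r in ready:
            quitting = True
        elif srv.fileno() in ready:
            conn, _ = srv.accept()   # handle_request() in the real server
            conn.close()

    os.close(quit_r)
    srv.close()
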
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/bldcollector/urls.py b/import-layers/yocto-poky/bitbake/lib/toaster/bldcollector/urls.py
index 64722f2..888175d 100644
--- a/import-layers/yocto-poky/bitbake/lib/toaster/bldcollector/urls.py
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/bldcollector/urls.py
@@ -1,7 +1,7 @@
#
# BitBake Toaster Implementation
#
-# Copyright (C) 2014 Intel Corporation
+# Copyright (C) 2014-2017 Intel Corporation
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
@@ -17,9 +17,11 @@
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
-from django.conf.urls import patterns, include, url
+from django.conf.urls import include, url
-urlpatterns = patterns('bldcollector.views',
+import bldcollector.views
+
+urlpatterns = [
# landing point for pushing a bitbake_eventlog.json file to this toaster instance
(typo fix applied below)
- url(r'^eventfile$', 'eventfile', name='eventfile'),
- )
+ url(r'^eventfile$', bldcollector.views.eventfile, name='eventfile'),
+]
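
This URLconf hunk tracks Django's removal of the patterns() helper (gone in Django 1.10): urlpatterns becomes a plain list of url() entries that reference view callables instead of dotted strings. A minimal sketch of the resulting shape, with myapp as a placeholder module:

    from django.conf.urls import url

    from myapp import views  # placeholder app

    urlpatterns = [
        url(r'^eventfile$', views.eventfile, name='eventfile'),
    ]
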
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/bldcontrol/bbcontroller.py b/import-layers/yocto-poky/bitbake/lib/toaster/bldcontrol/bbcontroller.py
index 912f67b..5195600 100644
--- a/import-layers/yocto-poky/bitbake/lib/toaster/bldcontrol/bbcontroller.py
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/bldcontrol/bbcontroller.py
@@ -37,8 +37,8 @@
"""
def __init__(self, be):
- import bb.server.xmlrpc
- self.connection = bb.server.xmlrpc._create_server(be.bbaddress,
+ import bb.server.xmlrpcclient
+ self.connection = bb.server.xmlrpcclient._create_server(be.bbaddress,
int(be.bbport))[0]
def _runCommand(self, command):
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/bldcontrol/localhostbecontroller.py b/import-layers/yocto-poky/bitbake/lib/toaster/bldcontrol/localhostbecontroller.py
index 1207102..4c17562 100644
--- a/import-layers/yocto-poky/bitbake/lib/toaster/bldcontrol/localhostbecontroller.py
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/bldcontrol/localhostbecontroller.py
@@ -24,6 +24,7 @@
import sys
import re
import shutil
+import time
from django.db import transaction
from django.db.models import Q
from bldcontrol.models import BuildEnvironment, BRLayer, BRVariable, BRTarget, BRBitbake
@@ -51,12 +52,14 @@
self.pokydirname = None
self.islayerset = False
- def _shellcmd(self, command, cwd=None, nowait=False):
+ def _shellcmd(self, command, cwd=None, nowait=False,env=None):
if cwd is None:
cwd = self.be.sourcedir
+ if env is None:
+ env=os.environ.copy()
- logger.debug("lbc_shellcmmd: (%s) %s" % (cwd, command))
- p = subprocess.Popen(command, cwd = cwd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+ logger.debug("lbc_shellcmd: (%s) %s" % (cwd, command))
+ p = subprocess.Popen(command, cwd = cwd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=env)
if nowait:
return
(out,err) = p.communicate()
@@ -85,6 +88,11 @@
return local_checkout_path
+ def setCloneStatus(self,bitbake,status,total,current):
+ bitbake.req.build.repos_cloned=current
+ bitbake.req.build.repos_to_clone=total
+ bitbake.req.build.save()
+
def setLayers(self, bitbake, layers, targets):
""" a word of attention: by convention, the first layer for any build will be poky! """
@@ -92,6 +100,8 @@
layerlist = []
nongitlayerlist = []
+ git_env = os.environ.copy()
+ # (note: add custom environment settings here)
# set layers in the layersource
@@ -132,7 +142,7 @@
cached_layers = {}
try:
- for remotes in self._shellcmd("git remote -v", self.be.sourcedir).split("\n"):
+ for remotes in self._shellcmd("git remote -v", self.be.sourcedir,env=git_env).split("\n"):
try:
remote = remotes.split("\t")[1].split(" ")[0]
if remote not in cached_layers:
@@ -147,7 +157,13 @@
logger.info("Using pre-checked out source for layer %s", cached_layers)
# 3. checkout the repositories
+ clone_count=0
+ clone_total=len(gitrepos.keys())
+ self.setCloneStatus(bitbake,'Started',clone_total,clone_count)
for giturl, commit in gitrepos.keys():
+ self.setCloneStatus(bitbake,'progress',clone_total,clone_count)
+ clone_count += 1
+
localdirname = os.path.join(self.be.sourcedir, self.getGitCloneDirectory(giturl, commit))
logger.debug("localhostbecontroller: giturl %s:%s checking out in current directory %s" % (giturl, commit, localdirname))
@@ -155,7 +171,7 @@
if os.path.exists(localdirname):
try:
localremotes = self._shellcmd("git remote -v",
- localdirname)
+ localdirname,env=git_env)
if not giturl in localremotes and commit != 'HEAD':
raise BuildSetupException("Existing git repository at %s, but with different remotes ('%s', expected '%s'). Toaster will not continue out of fear of damaging something." % (localdirname, ", ".join(localremotes.split("\n")), giturl))
except ShellCmdException:
@@ -165,18 +181,18 @@
else:
if giturl in cached_layers:
logger.debug("localhostbecontroller git-copying %s to %s" % (cached_layers[giturl], localdirname))
- self._shellcmd("git clone \"%s\" \"%s\"" % (cached_layers[giturl], localdirname))
- self._shellcmd("git remote remove origin", localdirname)
- self._shellcmd("git remote add origin \"%s\"" % giturl, localdirname)
+ self._shellcmd("git clone \"%s\" \"%s\"" % (cached_layers[giturl], localdirname),env=git_env)
+ self._shellcmd("git remote remove origin", localdirname,env=git_env)
+ self._shellcmd("git remote add origin \"%s\"" % giturl, localdirname,env=git_env)
else:
logger.debug("localhostbecontroller: cloning %s in %s" % (giturl, localdirname))
- self._shellcmd('git clone "%s" "%s"' % (giturl, localdirname))
+ self._shellcmd('git clone "%s" "%s"' % (giturl, localdirname),env=git_env)
# branch magic name "HEAD" will inhibit checkout
if commit != "HEAD":
logger.debug("localhostbecontroller: checking out commit %s to %s " % (commit, localdirname))
ref = commit if re.match('^[a-fA-F0-9]+$', commit) else 'origin/%s' % commit
- self._shellcmd('git fetch --all && git reset --hard "%s"' % ref, localdirname)
+ self._shellcmd('git fetch --all && git reset --hard "%s"' % ref, localdirname,env=git_env)
# take the localdirname as poky dir if we can find the oe-init-build-env
if self.pokydirname is None and os.path.exists(os.path.join(localdirname, "oe-init-build-env")):
@@ -186,7 +202,7 @@
# make sure we have a working bitbake
if not os.path.exists(os.path.join(self.pokydirname, 'bitbake')):
logger.debug("localhostbecontroller: checking bitbake into the poky dirname %s " % self.pokydirname)
- self._shellcmd("git clone -b \"%s\" \"%s\" \"%s\" " % (bitbake.commit, bitbake.giturl, os.path.join(self.pokydirname, 'bitbake')))
+ self._shellcmd("git clone -b \"%s\" \"%s\" \"%s\" " % (bitbake.commit, bitbake.giturl, os.path.join(self.pokydirname, 'bitbake')),env=git_env)
# verify our repositories
for name, dirpath in gitrepos[(giturl, commit)]:
@@ -198,6 +214,7 @@
if name != "bitbake":
layerlist.append(localdirpath.rstrip("/"))
+ self.setCloneStatus(bitbake,'complete',clone_total,clone_count)
logger.debug("localhostbecontroller: current layer list %s " % pformat(layerlist))
if self.pokydirname is None and os.path.exists(os.path.join(self.be.sourcedir, "oe-init-build-env")):
@@ -319,16 +336,32 @@
conf.write('%s="%s"\n' % (var.name, var.value))
conf.write('INHERIT+="toaster buildhistory"')
+ # clean the Toaster to build environment
+ env_clean = 'unset BBPATH;' # clean BBPATH for <= YP-2.4.0
+
# run bitbake server from the clone
bitbake = os.path.join(self.pokydirname, 'bitbake', 'bin', 'bitbake')
toasterlayers = os.path.join(builddir,"conf/toaster-bblayers.conf")
- self._shellcmd('bash -c \"source %s %s; BITBAKE_UI="knotty" %s --read %s --read %s '
- '--server-only -t xmlrpc -B 0.0.0.0:0\"' % (oe_init,
+ self._shellcmd('%s bash -c \"source %s %s; BITBAKE_UI="knotty" %s --read %s --read %s '
+ '--server-only -B 0.0.0.0:0\"' % (env_clean, oe_init,
builddir, bitbake, confpath, toasterlayers), self.be.sourcedir)
# read port number from bitbake.lock
- self.be.bbport = ""
+ self.be.bbport = -1
bblock = os.path.join(builddir, 'bitbake.lock')
+ # allow 10 seconds for bb lock file to appear but also be populated
+ for lock_check in range(10):
+ if not os.path.exists(bblock):
+ logger.debug("localhostbecontroller: waiting for bblock file to appear")
+ time.sleep(1)
+ continue
+ if 10 < os.stat(bblock).st_size:
+ break
+ logger.debug("localhostbecontroller: waiting for bblock content to appear")
+ time.sleep(1)
+ else:
+ raise BuildSetupException("Cannot find bitbake server lock file '%s'. Aborting." % bblock)
+
with open(bblock) as fplock:
for line in fplock:
if ":" in line:
@@ -336,7 +369,7 @@
logger.debug("localhostbecontroller: bitbake port %s", self.be.bbport)
break
- if not self.be.bbport:
+ if -1 == self.be.bbport:
raise BuildSetupException("localhostbecontroller: can't read bitbake port from %s" % bblock)
self.be.bbaddress = "localhost"
@@ -357,10 +390,11 @@
log = os.path.join(builddir, 'toaster_ui.log')
local_bitbake = os.path.join(os.path.dirname(os.getenv('BBBASEDIR')),
'bitbake')
- self._shellcmd(['bash -c \"(TOASTER_BRBE="%s" BBSERVER="0.0.0.0:-1" '
- '%s %s -u toasterui --token="" >>%s 2>&1;'
- 'BITBAKE_UI="knotty" BBSERVER=0.0.0.0:-1 %s -m)&\"' \
- % (brbe, local_bitbake, bbtargets, log, bitbake)],
+ self._shellcmd(['%s bash -c \"(TOASTER_BRBE="%s" BBSERVER="0.0.0.0:%s" '
+ '%s %s -u toasterui --read %s --read %s --token="" >>%s 2>&1;'
+ 'BITBAKE_UI="knotty" BBSERVER=0.0.0.0:%s %s -m)&\"' \
+ % (env_clean, brbe, self.be.bbport, local_bitbake, bbtargets, confpath, toasterlayers, log,
+ self.be.bbport, bitbake,)],
builddir, nowait=True)
logger.debug('localhostbecontroller: Build launched, exiting. '
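
The controller changes above add clone-progress reporting and replace the unconditional read of bitbake.lock with a bounded wait for the file to exist and be populated before extracting the server port. A hedged sketch of that polling loop (the helper name, exception type and port-parsing detail are assumptions):

    import os
    import time

    def read_bitbake_port(bblock, attempts=10):
        for _ in range(attempts):
            if not os.path.exists(bblock):
                time.sleep(1)                 # lock file not written yet
                continue
            if os.stat(bblock).st_size > 10:
                break                         # file exists and has content
            time.sleep(1)                     # file exists but is still empty
        else:
            raise RuntimeError("Cannot find bitbake server lock file '%s'" % bblock)

        with open(bblock) as fplock:
            for line in fplock:
                if ":" in line:
                    return int(line.split(":")[1].strip())
        raise RuntimeError("No port recorded in %s" % bblock)
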
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/bldcontrol/management/commands/checksettings.py b/import-layers/yocto-poky/bitbake/lib/toaster/bldcontrol/management/commands/checksettings.py
index 2ed994f..582114a 100644
--- a/import-layers/yocto-poky/bitbake/lib/toaster/bldcontrol/management/commands/checksettings.py
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/bldcontrol/management/commands/checksettings.py
@@ -1,4 +1,4 @@
-from django.core.management.base import NoArgsCommand, CommandError
+from django.core.management.base import BaseCommand, CommandError
from django.db import transaction
from django.core.management import call_command
@@ -18,7 +18,7 @@
return os.path.dirname(path)
-class Command(NoArgsCommand):
+class Command(BaseCommand):
args = ""
help = "Verifies that the configured settings are valid and usable, or prompts the user to fix the settings."
@@ -75,7 +75,10 @@
call_command("loaddata", "settings")
template_conf = os.environ.get("TEMPLATECONF", "")
- if "poky" in template_conf:
+ if ToasterSetting.objects.filter(name='CUSTOM_XML_ONLY').count() > 0:
+ # only use the custom settings
+ pass
+ elif "poky" in template_conf:
print("Loading poky configuration")
call_command("loaddata", "poky")
else:
@@ -152,7 +155,7 @@
- def handle_noargs(self, **options):
+ def handle(self, **options):
retval = 0
retval += self._verify_build_environment()
retval += self._verify_default_settings()
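
checksettings.py (and the other management commands below) move from NoArgsCommand to BaseCommand because Django 1.10 removed NoArgsCommand; handle_noargs() becomes handle(). The minimal shape such a command now takes:

    from django.core.management.base import BaseCommand

    class Command(BaseCommand):
        help = "Example command that takes no arguments"

        def handle(self, *args, **options):
            self.stdout.write("verification logic would run here")
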
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/bldcontrol/management/commands/runbuilds.py b/import-layers/yocto-poky/bitbake/lib/toaster/bldcontrol/management/commands/runbuilds.py
index df11f9d..791e53e 100644
--- a/import-layers/yocto-poky/bitbake/lib/toaster/bldcontrol/management/commands/runbuilds.py
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/bldcontrol/management/commands/runbuilds.py
@@ -1,4 +1,4 @@
-from django.core.management.base import NoArgsCommand
+from django.core.management.base import BaseCommand
from django.db import transaction
from django.db.models import Q
@@ -16,7 +16,7 @@
logger = logging.getLogger("toaster")
-class Command(NoArgsCommand):
+class Command(BaseCommand):
args = ""
help = "Schedules and executes build requests as possible. "\
"Does not return (interrupt with Ctrl-C)"
@@ -79,6 +79,14 @@
br.save()
bec.be.lock = BuildEnvironment.LOCK_FREE
bec.be.save()
+ # Cancel the pending build and report the exception to the UI
+ log_object = LogMessage.objects.create(
+ build = br.build,
+ level = LogMessage.EXCEPTION,
+ message = errmsg)
+ log_object.save()
+ br.build.outcome = Build.FAILED
+ br.build.save()
def archive(self):
for br in BuildRequest.objects.filter(state=BuildRequest.REQ_ARCHIVE):
@@ -168,7 +176,7 @@
except Exception as e:
logger.warn("runbuilds: schedule exception %s" % str(e))
- def handle_noargs(self, **options):
+ def handle(self, **options):
pidfile_path = os.path.join(os.environ.get("BUILDDIR", "."),
".runbuilds.pid")
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/orm/fixtures/custom_toaster_append.sh_sample b/import-layers/yocto-poky/bitbake/lib/toaster/orm/fixtures/custom_toaster_append.sh_sample
new file mode 100755
index 0000000..8c4e163
--- /dev/null
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/orm/fixtures/custom_toaster_append.sh_sample
@@ -0,0 +1,49 @@
+#!/bin/bash
+
+# Copyright (C) 2017 Intel Corp.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License version 2 as
+# published by the Free Software Foundation.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+# See the GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+# This is sample software. Rename it to 'custom_toaster_append.sh' and
+# enable the respective custom sections.
+
+verbose=0
+if [ $verbose -ne 0 ] ; then
+ echo "custom_toaster_append.sh:$*"
+fi
+
+if [ "toaster_prepend" = "$1" ] ; then
+ echo "Add custom actions here when Toaster script is started"
+fi
+
+if [ "web_start_postpend" = "$1" ] ; then
+ echo "Add custom actions here after Toaster web service is started"
+fi
+
+if [ "web_stop_postpend" = "$1" ] ; then
+ echo "Add custom actions here after Toaster web service is stopped"
+fi
+
+if [ "noweb_start_postpend" = "$1" ] ; then
+ echo "Add custom actions here after Toaster (no web) service is started"
+fi
+
+if [ "noweb_stop_postpend" = "$1" ] ; then
+ echo "Add custom actions here after Toaster (no web) service is stopped"
+fi
+
+if [ "toaster_postpend" = "$1" ] ; then
+ echo "Add custom actions here after Toaster script is done"
+fi
+
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/orm/fixtures/oe-core.xml b/import-layers/yocto-poky/bitbake/lib/toaster/orm/fixtures/oe-core.xml
index 66c3595..00720c3 100644
--- a/import-layers/yocto-poky/bitbake/lib/toaster/orm/fixtures/oe-core.xml
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/orm/fixtures/oe-core.xml
@@ -8,9 +8,9 @@
<!-- Bitbake versions which correspond to the metadata release -->
<object model="orm.bitbakeversion" pk="1">
- <field type="CharField" name="name">pyro</field>
+ <field type="CharField" name="name">rocko</field>
<field type="CharField" name="giturl">git://git.openembedded.org/bitbake</field>
- <field type="CharField" name="branch">1.34</field>
+ <field type="CharField" name="branch">1.36</field>
</object>
<object model="orm.bitbakeversion" pk="2">
<field type="CharField" name="name">HEAD</field>
@@ -25,11 +25,11 @@
<!-- Releases available -->
<object model="orm.release" pk="1">
- <field type="CharField" name="name">pyro</field>
- <field type="CharField" name="description">Openembedded Pyro</field>
+ <field type="CharField" name="name">rocko</field>
+ <field type="CharField" name="description">Openembedded Rocko</field>
<field rel="ManyToOneRel" to="orm.bitbakeversion" name="bitbake_version">1</field>
- <field type="CharField" name="branch_name">pyro</field>
- <field type="TextField" name="helptext">Toaster will run your builds using the tip of the <a href=\"http://cgit.openembedded.org/openembedded-core/log/?h=pyro\">OpenEmbedded Pyro</a> branch.</field>
+ <field type="CharField" name="branch_name">rocko</field>
+ <field type="TextField" name="helptext">Toaster will run your builds using the tip of the <a href=\"http://cgit.openembedded.org/openembedded-core/log/?h=rocko\">OpenEmbedded Rocko</a> branch.</field>
</object>
<object model="orm.release" pk="2">
<field type="CharField" name="name">local</field>
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/orm/fixtures/poky.xml b/import-layers/yocto-poky/bitbake/lib/toaster/orm/fixtures/poky.xml
index 7827aac..2f39d77 100644
--- a/import-layers/yocto-poky/bitbake/lib/toaster/orm/fixtures/poky.xml
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/orm/fixtures/poky.xml
@@ -8,9 +8,9 @@
<!-- Bitbake versions which correspond to the metadata release -->
<object model="orm.bitbakeversion" pk="1">
- <field type="CharField" name="name">pyro</field>
+ <field type="CharField" name="name">rocko</field>
<field type="CharField" name="giturl">git://git.yoctoproject.org/poky</field>
- <field type="CharField" name="branch">pyro</field>
+ <field type="CharField" name="branch">rocko</field>
<field type="CharField" name="dirpath">bitbake</field>
</object>
<object model="orm.bitbakeversion" pk="2">
@@ -29,11 +29,11 @@
<!-- Releases available -->
<object model="orm.release" pk="1">
- <field type="CharField" name="name">pyro</field>
- <field type="CharField" name="description">Yocto Project 2.3 "Pyro"</field>
+ <field type="CharField" name="name">rocko</field>
+ <field type="CharField" name="description">Yocto Project 2.4 "Rocko"</field>
<field rel="ManyToOneRel" to="orm.bitbakeversion" name="bitbake_version">1</field>
- <field type="CharField" name="branch_name">pyro</field>
- <field type="TextField" name="helptext">Toaster will run your builds using the tip of the <a href="http://git.yoctoproject.org/cgit/cgit.cgi/poky/log/?h=pyro">Yocto Project Pyro branch</a>.</field>
+ <field type="CharField" name="branch_name">rocko</field>
+ <field type="TextField" name="helptext">Toaster will run your builds using the tip of the <a href="http://git.yoctoproject.org/cgit/cgit.cgi/poky/log/?h=rocko">Yocto Project Rocko branch</a>.</field>
</object>
<object model="orm.release" pk="2">
<field type="CharField" name="name">local</field>
@@ -105,7 +105,7 @@
<field rel="ManyToOneRel" to="orm.layer" name="layer">1</field>
<field type="IntegerField" name="layer_source">0</field>
<field rel="ManyToOneRel" to="orm.release" name="release">1</field>
- <field type="CharField" name="branch">pyro</field>
+ <field type="CharField" name="branch">rocko</field>
<field type="CharField" name="dirpath">meta</field>
</object>
<object model="orm.layer_version" pk="2">
@@ -136,7 +136,7 @@
<field rel="ManyToOneRel" to="orm.layer" name="layer">2</field>
<field type="IntegerField" name="layer_source">0</field>
<field rel="ManyToOneRel" to="orm.release" name="release">1</field>
- <field type="CharField" name="branch">pyro</field>
+ <field type="CharField" name="branch">rocko</field>
<field type="CharField" name="dirpath">meta-poky</field>
</object>
<object model="orm.layer_version" pk="5">
@@ -167,7 +167,7 @@
<field rel="ManyToOneRel" to="orm.layer" name="layer">3</field>
<field type="IntegerField" name="layer_source">0</field>
<field rel="ManyToOneRel" to="orm.release" name="release">1</field>
- <field type="CharField" name="branch">pyro</field>
+ <field type="CharField" name="branch">rocko</field>
<field type="CharField" name="dirpath">meta-yocto-bsp</field>
</object>
<object model="orm.layer_version" pk="8">
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/orm/management/commands/lsupdates.py b/import-layers/yocto-poky/bitbake/lib/toaster/orm/management/commands/lsupdates.py
index 482908d..efc6b3a 100644
--- a/import-layers/yocto-poky/bitbake/lib/toaster/orm/management/commands/lsupdates.py
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/orm/management/commands/lsupdates.py
@@ -4,7 +4,7 @@
#
# BitBake Toaster Implementation
#
-# Copyright (C) 2016 Intel Corporation
+# Copyright (C) 2016-2017 Intel Corporation
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
@@ -19,10 +19,12 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
-from django.core.management.base import NoArgsCommand
+from django.core.management.base import BaseCommand
from orm.models import LayerSource, Layer, Release, Layer_Version
from orm.models import LayerVersionDependency, Machine, Recipe
+from orm.models import Distro
+from orm.models import ToasterSetting
import os
import sys
@@ -56,7 +58,7 @@
self.signal = False
-class Command(NoArgsCommand):
+class Command(BaseCommand):
args = ""
help = "Updates locally cached information from a layerindex server"
@@ -80,6 +82,8 @@
os.system('setterm -cursor off')
self.apiurl = DEFAULT_LAYERINDEX_SERVER
+ if ToasterSetting.objects.filter(name='CUSTOM_LAYERINDEX_SERVER').count() == 1:
+ self.apiurl = ToasterSetting.objects.get(name = 'CUSTOM_LAYERINDEX_SERVER').value
assert self.apiurl is not None
try:
@@ -91,7 +95,9 @@
proxy_settings = os.environ.get("http_proxy", None)
- def _get_json_response(apiurl=DEFAULT_LAYERINDEX_SERVER):
+ def _get_json_response(apiurl=None):
+ if None == apiurl:
+ apiurl=self.apiurl
http_progress = Spinner()
http_progress.start()
@@ -251,6 +257,24 @@
depends_on=lvd)
self.mini_progress("Layer version dependencies", i, total)
+ # update Distros
+ logger.info("Fetching distro information")
+ distros_info = _get_json_response(
+ apilinks['distros'] + "?filter=layerbranch__branch__name:%s" %
+ "OR".join(whitelist_branch_names))
+
+ total = len(distros_info)
+ for i, di in enumerate(distros_info):
+ distro, created = Distro.objects.get_or_create(
+ name=di['name'],
+ layer_version=Layer_Version.objects.get(
+ pk=li_layer_branch_id_to_toaster_lv_id[di['layerbranch']]))
+ distro.up_date = di['updated']
+ distro.name = di['name']
+ distro.description = di['description']
+ distro.save()
+ self.mini_progress("distros", i, total)
+
# update machines
logger.info("Fetching machine information")
machines_info = _get_json_response(
@@ -309,5 +333,5 @@
os.system('setterm -cursor on')
- def handle_noargs(self, **options):
+ def handle(self, **options):
self.update()
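
With the CUSTOM_LAYERINDEX_SERVER lookup added above, lsupdates can be pointed at a private layer index simply by creating a ToasterSetting row; CUSTOM_XML_ONLY in checksettings.py works the same way for fixtures. The URL below is an invented example, not a documented endpoint:

    from orm.models import ToasterSetting

    ToasterSetting.objects.get_or_create(
        name='CUSTOM_LAYERINDEX_SERVER',
        defaults={'value': 'https://layers.example.com/layerindex/api/'})
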
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/orm/migrations/0016_clone_progress.py b/import-layers/yocto-poky/bitbake/lib/toaster/orm/migrations/0016_clone_progress.py
new file mode 100644
index 0000000..cd4023b
--- /dev/null
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/orm/migrations/0016_clone_progress.py
@@ -0,0 +1,24 @@
+# -*- coding: utf-8 -*-
+from __future__ import unicode_literals
+
+from django.db import migrations, models
+
+class Migration(migrations.Migration):
+
+ dependencies = [
+ ('orm', '0015_layer_local_source_dir'),
+ ]
+
+ operations = [
+ migrations.AddField(
+ model_name='build',
+ name='repos_cloned',
+ field=models.IntegerField(default=1),
+ ),
+ migrations.AddField(
+ model_name='build',
+ name='repos_to_clone',
+ field=models.IntegerField(default=1), # (default off)
+ ),
+ ]
+
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/orm/migrations/0017_distro_clone.py b/import-layers/yocto-poky/bitbake/lib/toaster/orm/migrations/0017_distro_clone.py
new file mode 100644
index 0000000..d3c5901
--- /dev/null
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/orm/migrations/0017_distro_clone.py
@@ -0,0 +1,25 @@
+# -*- coding: utf-8 -*-
+from __future__ import unicode_literals
+
+from django.db import migrations, models
+
+class Migration(migrations.Migration):
+
+ dependencies = [
+ ('orm', '0016_clone_progress'),
+ ]
+
+ operations = [
+ migrations.CreateModel(
+ name='Distro',
+ fields=[
+ ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
+ ('up_id', models.IntegerField(default=None, null=True)),
+ ('up_date', models.DateTimeField(default=None, null=True)),
+ ('name', models.CharField(max_length=255)),
+ ('description', models.CharField(max_length=255)),
+ ('layer_version', models.ForeignKey(to='orm.Layer_Version')),
+ ],
+ ),
+ ]
+
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/orm/models.py b/import-layers/yocto-poky/bitbake/lib/toaster/orm/models.py
index a49f9a4..3a7dff8 100644
--- a/import-layers/yocto-poky/bitbake/lib/toaster/orm/models.py
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/orm/models.py
@@ -321,6 +321,22 @@
return queryset
+ def get_available_distros(self):
+ """ Returns QuerySet of all Distros which are provided by the
+ Layers currently added to the Project """
+ queryset = Distro.objects.filter(
+ layer_version__in=self.get_project_layer_versions())
+
+ return queryset
+
+ def get_all_compatible_distros(self):
+ """ Returns QuerySet of all the compatible Wind River distros available to the
+ project including ones from Layers not currently added """
+ queryset = Distro.objects.filter(
+ layer_version__in=self.get_all_compatible_layer_versions())
+
+ return queryset
+
def get_available_recipes(self):
""" Returns QuerySet of all the recipes that are provided by layers
added to this project """
@@ -435,7 +451,13 @@
recipes_to_parse = models.IntegerField(default=1)
# number of recipes parsed so far for this build
- recipes_parsed = models.IntegerField(default=0)
+ recipes_parsed = models.IntegerField(default=1)
+
+ # number of repos to clone for this build
+ repos_to_clone = models.IntegerField(default=1)
+
+ # number of repos cloned so far for this build (default off)
+ repos_cloned = models.IntegerField(default=1)
@staticmethod
def get_recent(project=None):
@@ -486,7 +508,7 @@
tf = Task.objects.filter(build = self)
tfc = tf.count()
if tfc > 0:
- completeper = tf.exclude(order__isnull=True).count()*100 // tfc
+ completeper = tf.exclude(outcome=Task.OUTCOME_NA).count()*100 // tfc
else:
completeper = 0
return completeper
@@ -667,6 +689,13 @@
else:
return False
+ def is_cloning(self):
+ """
+ True if the build is still cloning repos
+ """
+ return self.outcome == Build.IN_PROGRESS and \
+ self.repos_cloned < self.repos_to_clone
+
def is_parsing(self):
"""
True if the build is still parsing recipes
@@ -680,10 +709,11 @@
tasks.
Note that the mechanism for testing whether a Task is "done" is whether
- its order field is set, as per the completeper() method.
+ its outcome field is set, as per the completeper() method.
"""
return self.outcome == Build.IN_PROGRESS and \
- self.task_build.filter(order__isnull=False).count() == 0
+ self.task_build.exclude(outcome=Task.OUTCOME_NA).count() == 0
+
def get_state(self):
"""
@@ -698,6 +728,8 @@
return 'Cancelling';
elif self.is_queued():
return 'Queued'
+ elif self.is_cloning():
+ return 'Cloning'
elif self.is_parsing():
return 'Parsing'
elif self.is_starting():
@@ -1485,12 +1517,12 @@
return self._handle_url_path(self.layer.vcs_web_tree_base_url, '')
def get_vcs_reference(self):
+ if self.commit is not None and len(self.commit) > 0:
+ return self.commit
if self.branch is not None and len(self.branch) > 0:
return self.branch
if self.release is not None:
return self.release.name
- if self.commit is not None and len(self.commit) > 0:
- return self.commit
return 'N/A'
def get_detailspage_url(self, project_id=None):
@@ -1626,7 +1658,7 @@
def get_base_recipe_file(self):
"""Get the base recipe file path if it exists on the file system"""
- path_schema_one = "%s/%s" % (self.base_recipe.layer_version.dirpath,
+ path_schema_one = "%s/%s" % (self.base_recipe.layer_version.local_path,
self.base_recipe.file_path)
path_schema_two = self.base_recipe.file_path
@@ -1780,6 +1812,21 @@
except FileNotFoundError:
logger.info("Stopping existing runbuilds: no current process found")
+class Distro(models.Model):
+ search_allowed_fields = ["name", "description", "layer_version__layer__name"]
+ up_date = models.DateTimeField(null = True, default = None)
+
+ layer_version = models.ForeignKey('Layer_Version')
+ name = models.CharField(max_length=255)
+ description = models.CharField(max_length=255)
+
+ def get_vcs_distro_file_link_url(self):
+ path = self.name+'.conf'
+ return self.layer_version.get_vcs_file_link_url(path)
+
+ def __unicode__(self):
+ return "Distro " + self.name + "(" + self.description + ")"
+
django.db.models.signals.post_save.connect(invalidate_cache)
django.db.models.signals.post_delete.connect(invalidate_cache)
django.db.models.signals.m2m_changed.connect(invalidate_cache)
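
The model changes above add a 'Cloning' phase ahead of 'Parsing' in get_state(), driven by the new repos_cloned/repos_to_clone counters. A deliberately simplified, plain-Python restatement of that precedence (it collapses the parsing/starting distinction into a single check):

    def build_state(outcome, repos_cloned, repos_to_clone, tasks_with_outcome):
        if outcome != "IN_PROGRESS":
            return outcome
        if repos_cloned < repos_to_clone:
            return "Cloning"
        if tasks_with_outcome == 0:
            return "Parsing"     # simplification of is_parsing()/is_starting()
        return "In Progress"

    assert build_state("IN_PROGRESS", 1, 3, 0) == "Cloning"
    assert build_state("IN_PROGRESS", 3, 3, 0) == "Parsing"
    assert build_state("IN_PROGRESS", 3, 3, 5) == "In Progress"
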
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/api.py b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/api.py
index 1a6507c..ab6ba69 100644
--- a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/api.py
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/api.py
@@ -18,6 +18,7 @@
# Please run flake8 on this file before sending patches
+import os
import re
import logging
import json
@@ -28,7 +29,7 @@
from orm.models import Recipe, CustomImageRecipe, CustomImagePackage
from orm.models import Layer, Target, Package, Package_Dependency
from orm.models import ProjectVariable
-from bldcontrol.models import BuildRequest
+from bldcontrol.models import BuildRequest, BuildEnvironment
from bldcontrol import bbcontroller
from django.http import HttpResponse, JsonResponse
@@ -509,6 +510,27 @@
(tpackage.package.name, e))
pass
+ # pre-create layer directory structure, so that other builds
+ # are not blocked by this new recipe dependecy
+ # NOTE: this is parallel code to 'localhostbecontroller.py'
+ be = BuildEnvironment.objects.all()[0]
+ layerpath = os.path.join(be.builddir,
+ CustomImageRecipe.LAYER_NAME)
+ for name in ("conf", "recipes"):
+ path = os.path.join(layerpath, name)
+ if not os.path.isdir(path):
+ os.makedirs(path)
+ # pre-create layer.conf
+ config = os.path.join(layerpath, "conf", "layer.conf")
+ if not os.path.isfile(config):
+ with open(config, "w") as conf:
+ conf.write('BBPATH .= ":${LAYERDIR}"\nBBFILES += "${LAYERDIR}/recipes/*.bb"\n')
+ # pre-create new image's recipe file
+ recipe_path = os.path.join(layerpath, "recipes", "%s.bb" %
+ recipe.name)
+ with open(recipe_path, "w") as recipef:
+ recipef.write(recipe.generate_recipe_file_contents())
+
return JsonResponse(
{"error": "ok",
"packages": recipe.get_all_packages().count(),
@@ -752,7 +774,6 @@
return error_response("Package %s not found in excludes"
" but was in included list" %
package.name)
-
else:
recipe.appends_set.add(package)
# Make sure that package is not in the excludes set
@@ -760,26 +781,27 @@
recipe.excludes_set.remove(package)
except:
pass
- # Add the dependencies we think will be added to the recipe
- # as a result of appending this package.
- # TODO this should recurse down the entire deps tree
- for dep in package.package_dependencies_source.all_depends():
- try:
- cust_package = CustomImagePackage.objects.get(
- name=dep.depends_on.name)
- recipe.includes_set.add(cust_package)
- try:
- # When adding the pre-requisite package, make
- # sure it's not in the excluded list from a
- # prior removal.
- recipe.excludes_set.remove(cust_package)
- except package.DoesNotExist:
- # Don't care if the package had never been excluded
- pass
- except:
- logger.warning("Could not add package's suggested"
- "dependencies to the list")
+ # Add the dependencies we think will be added to the recipe
+ # as a result of appending this package.
+ # TODO this should recurse down the entire deps tree
+ for dep in package.package_dependencies_source.all_depends():
+ try:
+ cust_package = CustomImagePackage.objects.get(
+ name=dep.depends_on.name)
+
+ recipe.includes_set.add(cust_package)
+ try:
+ # When adding the pre-requisite package, make
+ # sure it's not in the excluded list from a
+ # prior removal.
+ recipe.excludes_set.remove(cust_package)
+ except package.DoesNotExist:
+ # Don't care if the package had never been excluded
+ pass
+ except:
+ logger.warning("Could not add package's suggested"
+ "dependencies to the list")
return JsonResponse({"error": "ok"})
def delete(self, request, *args, **kwargs):
@@ -797,22 +819,24 @@
recipe.excludes_set.add(package)
else:
recipe.appends_set.remove(package)
- all_current_packages = recipe.get_all_packages()
- reverse_deps_dictlist = self._get_all_dependents(
- package.pk,
- all_current_packages)
+ # remove dependencies as well
+ all_current_packages = recipe.get_all_packages()
- ids = [entry['pk'] for entry in reverse_deps_dictlist]
- reverse_deps = CustomImagePackage.objects.filter(id__in=ids)
- for r in reverse_deps:
- try:
- if r.id in included_packages:
- recipe.excludes_set.add(r)
- else:
- recipe.appends_set.remove(r)
- except:
- pass
+ reverse_deps_dictlist = self._get_all_dependents(
+ package.pk,
+ all_current_packages)
+
+ ids = [entry['pk'] for entry in reverse_deps_dictlist]
+ reverse_deps = CustomImagePackage.objects.filter(id__in=ids)
+ for r in reverse_deps:
+ try:
+ if r.id in included_packages:
+ recipe.excludes_set.add(r)
+ else:
+ recipe.appends_set.remove(r)
+ except:
+ pass
return JsonResponse({"error": "ok"})
except CustomImageRecipe.DoesNotExist:
@@ -874,6 +898,12 @@
machinevar.value = request.POST['machineName']
machinevar.save()
+ # Distro name change
+ if 'distroName' in request.POST:
+ distrovar = prj.projectvariable_set.get(name="DISTRO")
+ distrovar.value = request.POST['distroName']
+ distrovar.save()
+
return JsonResponse({"error": "ok"})
def get(self, request, *args, **kwargs):
@@ -960,10 +990,11 @@
except ProjectVariable.DoesNotExist:
data["machine"] = None
try:
- data["distro"] = project.projectvariable_set.get(
- name="DISTRO").value
+ data["distro"] = {"name":
+ project.projectvariable_set.get(
+ name="DISTRO").value}
except ProjectVariable.DoesNotExist:
- data["distro"] = "-- not set yet"
+ data["distro"] = None
data['error'] = "ok"
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/static/js/mrbsection.js b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/static/js/mrbsection.js
index 73d0935..c0c5fa9 100644
--- a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/static/js/mrbsection.js
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/static/js/mrbsection.js
@@ -61,6 +61,12 @@
return (cached.recipes_parsed_percentage !== build.recipes_parsed_percentage);
}
+ // returns true if the number of repos cloned/to clone changed
+ function cloneProgressChanged(build) {
+ var cached = getCached(build);
+ return (cached.repos_cloned_percentage !== build.repos_cloned_percentage);
+ }
+
function refreshMostRecentBuilds(){
libtoaster.getMostRecentBuilds(
libtoaster.ctx.mostRecentBuildsUrl,
@@ -100,6 +106,15 @@
container.html(html);
}
+ else if (cloneProgressChanged(build)) {
+ // update the clone progress text
+ selector = '#repos-cloned-percentage-' + build.id;
+ $(selector).html(build.repos_cloned_percentage);
+
+ // update the recipe progress bar
+ selector = '#repos-cloned-percentage-bar-' + build.id;
+ $(selector).width(build.repos_cloned_percentage + '%');
+ }
else if (tasksProgressChanged(build)) {
// update the task progress text
selector = '#build-pc-done-' + build.id;
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/static/js/projectpage.js b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/static/js/projectpage.js
index 21adf81..506471e 100644
--- a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/static/js/projectpage.js
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/static/js/projectpage.js
@@ -15,6 +15,13 @@
var machineInputForm = $("#machine-input-form");
var invalidMachineNameHelp = $("#invalid-machine-name-help");
+ var distroChangeInput = $("#distro-change-input");
+ var distroChangeBtn = $("#distro-change-btn");
+ var distroForm = $("#select-distro-form");
+ var distroChangeFormToggle = $("#change-distro-toggle");
+ var distroNameTitle = $("#project-distro-name");
+ var distroChangeCancel = $("#cancel-distro-change");
+
var freqBuildBtn = $("#freq-build-btn");
var freqBuildList = $("#freq-build-list");
@@ -26,6 +33,7 @@
var currentLayerAddSelection;
var currentMachineAddSelection = "";
+ var currentDistroAddSelection = "";
var urlParams = libtoaster.parseUrlParams();
@@ -45,6 +53,17 @@
updateMachineName(prjInfo.machine.name);
}
+ /* If we're receiving a distro set from the url and it's different from
+ * our current distro then activate set machine sequence.
+ */
+ if (urlParams.hasOwnProperty('setDistro') &&
+ urlParams.setDistro !== prjInfo.distro.name){
+ distroChangeInput.val(urlParams.setDistro);
+ distroChangeBtn.click();
+ } else {
+ updateDistroName(prjInfo.distro.name);
+ }
+
/* Now we're really ready show the page */
$("#project-page").show();
@@ -278,6 +297,60 @@
});
+ /* Change distro functionality */
+
+ distroChangeFormToggle.click(function(){
+ distroForm.slideDown();
+ distroNameTitle.hide();
+ $(this).hide();
+ });
+
+ distroChangeCancel.click(function(){
+ distroForm.slideUp(function(){
+ distroNameTitle.show();
+ distroChangeFormToggle.show();
+ });
+ });
+
+ function updateDistroName(distroName){
+ distroChangeInput.val(distroName);
+ distroNameTitle.text(distroName);
+ }
+
+ libtoaster.makeTypeahead(distroChangeInput,
+ libtoaster.ctx.distrosTypeAheadUrl,
+ { }, function(item){
+ currentDistroAddSelection = item.name;
+ distroChangeBtn.removeAttr("disabled");
+ });
+
+ distroChangeBtn.click(function(e){
+ e.preventDefault();
+ /* We accept any value regardless of typeahead selection or not */
+ if (distroChangeInput.val().length === 0)
+ return;
+
+ currentDistroAddSelection = distroChangeInput.val();
+
+ libtoaster.editCurrentProject(
+ { distroName : currentDistroAddSelection },
+ function(){
+ /* Success: distro changed */
+ updateDistroName(currentDistroAddSelection);
+ distroChangeCancel.click();
+
+ /* Show the alert message */
+ var message = $('<span>You have changed the distro to: <strong><span id="notify-machine-name"></span></strong></span>');
+ message.find("#notify-machine-name").text(currentDistroAddSelection);
+ libtoaster.showChangeNotification(message);
+ },
+ function(){
+ /* Failed machine changed */
+ console.warn("Failed to change distro");
+ });
+ });
+
+
/* Change release functionality */
function updateProjectRelease(release){
releaseTitle.text(release.description);
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/static/js/projecttopbar.js b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/static/js/projecttopbar.js
index 92ab2d6..69220aa 100644
--- a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/static/js/projecttopbar.js
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/static/js/projecttopbar.js
@@ -73,14 +73,14 @@
newBuildTargetBuildBtn.click(function (e) {
e.preventDefault();
- if (!newBuildTargetInput.val()) {
+ if (!newBuildTargetInput.val().trim()) {
return;
}
/* We use the value of the input field so as to maintain any command also
* added e.g. core-image-minimal:clean and because we can build targets
* that toaster doesn't yet know about
*/
- selectedTarget = { name: newBuildTargetInput.val() };
+ selectedTarget = { name: newBuildTargetInput.val().trim() };
/* Fire off the build */
libtoaster.startABuild(null, selectedTarget.name,
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/tables.py b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/tables.py
index e2d23c1..dca2fa2 100644
--- a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/tables.py
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/tables.py
@@ -23,6 +23,7 @@
from orm.models import Recipe, ProjectLayer, Layer_Version, Machine, Project
from orm.models import CustomImageRecipe, Package, Target, Build, LogMessage, Task
from orm.models import CustomImagePackage, Package_DependencyManager
+from orm.models import Distro
from django.db.models import Q, Max, Sum, Count, When, Case, Value, IntegerField
from django.conf.urls import url
from django.core.urlresolvers import reverse, resolve
@@ -1536,3 +1537,93 @@
context['build_in_progress_none_completed'] = False
return context
+
+
+class DistrosTable(ToasterTable):
+ """Table of Distros in Toaster"""
+
+ def __init__(self, *args, **kwargs):
+ super(DistrosTable, self).__init__(*args, **kwargs)
+ self.empty_state = "Toaster has no distro information for this project. Sadly, distro information cannot be obtained from builds, so this page will remain empty."
+ self.title = "Compatible Distros"
+ self.default_orderby = "name"
+
+ def get_context_data(self, **kwargs):
+ context = super(DistrosTable, self).get_context_data(**kwargs)
+ context['project'] = Project.objects.get(pk=kwargs['pid'])
+ return context
+
+ def setup_filters(self, *args, **kwargs):
+ project = Project.objects.get(pk=kwargs['pid'])
+
+ in_current_project_filter = TableFilter(
+ "in_current_project",
+ "Filter by project Distros"
+ )
+
+ in_project_action = TableFilterActionToggle(
+ "in_project",
+ "Distro provided by layers added to this project",
+ ProjectFilters.in_project(self.project_layers)
+ )
+
+ not_in_project_action = TableFilterActionToggle(
+ "not_in_project",
+ "Distros provided by layers not added to this project",
+ ProjectFilters.not_in_project(self.project_layers)
+ )
+
+ in_current_project_filter.add_action(in_project_action)
+ in_current_project_filter.add_action(not_in_project_action)
+ self.add_filter(in_current_project_filter)
+
+ def setup_queryset(self, *args, **kwargs):
+ prj = Project.objects.get(pk = kwargs['pid'])
+ self.queryset = prj.get_all_compatible_distros()
+ self.queryset = self.queryset.order_by(self.default_orderby)
+
+ self.static_context_extra['current_layers'] = \
+ self.project_layers = \
+ prj.get_project_layer_versions(pk=True)
+
+ def setup_columns(self, *args, **kwargs):
+
+ self.add_column(title="Distro",
+ hideable=False,
+ orderable=True,
+ field_name="name")
+
+ self.add_column(title="Description",
+ field_name="description")
+
+ layer_link_template = '''
+ <a href="{% url 'layerdetails' extra.pid data.layer_version.id %}">
+ {{data.layer_version.layer.name}}</a>
+ '''
+
+ self.add_column(title="Layer",
+ static_data_name="layer_version__layer__name",
+ static_data_template=layer_link_template,
+ orderable=True)
+
+ self.add_column(title="Git revision",
+ help_text="The Git branch, tag or commit. For the layers from the OpenEmbedded layer source, the revision is always the branch compatible with the Yocto Project version you selected for this project",
+ hidden=True,
+ field_name="layer_version__get_vcs_reference")
+
+ wrtemplate_file_template = '''<code>conf/machine/{{data.name}}.conf</code>
+ <a href="{{data.get_vcs_machine_file_link_url}}" target="_blank"><span class="glyphicon glyphicon-new-window"></i></a>'''
+
+ self.add_column(title="Distro file",
+ hidden=True,
+ static_data_name="templatefile",
+ static_data_template=wrtemplate_file_template)
+
+
+ self.add_column(title="Select",
+ help_text="Sets the selected distro to the project",
+ hideable=False,
+ filter_name="in_current_project",
+ static_data_name="add-del-layers",
+ static_data_template='{% include "distro_btn.html" %}')
+
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/templates/base.html b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/templates/base.html
index 32b4979..4f72064 100644
--- a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/templates/base.html
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/templates/base.html
@@ -49,6 +49,7 @@
recipesTypeAheadUrl: {% url 'xhr_recipestypeahead' project.id as paturl%}{{paturl|json}},
layersTypeAheadUrl: {% url 'xhr_layerstypeahead' project.id as paturl%}{{paturl|json}},
machinesTypeAheadUrl: {% url 'xhr_machinestypeahead' project.id as paturl%}{{paturl|json}},
+ distrosTypeAheadUrl: {% url 'xhr_distrostypeahead' project.id as paturl%}{{paturl|json}},
projectBuildsUrl: {% url 'projectbuilds' project.id as pburl %}{{pburl|json}},
xhrCustomRecipeUrl : "{% url 'xhr_customrecipe' %}",
projectId : {{project.id}},
@@ -109,6 +110,7 @@
All builds
</a>
</li>
+ {% if project_enable %}
<li id="navbar-all-projects"
{% if request.resolver_match.url_name == 'all-projects' %}
class="active"
@@ -118,6 +120,7 @@
All projects
</a>
</li>
+ {% endif %}
{% endif %}
<li id="navbar-docs">
<a target="_blank" href="http://www.yoctoproject.org/docs/latest/toaster-manual/toaster-manual.html">
@@ -126,7 +129,9 @@
</a>
</li>
</ul>
+ {% if project_enable %}
<a class="btn btn-default navbar-btn navbar-right" id="new-project-button" href="{% url 'newproject' %}">New project</a>
+ {% endif %}
</div>
</div>
</nav>
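
The two "{% if project_enable %}" guards above hide the "All projects" link and the "New project" button when Toaster runs without its managed-build server. A minimal sketch of the gate, mirroring the views.py and widgets.py changes later in this patch; that the variable is exported before the web server starts is an assumption about the deployment:

    import os

    def managed_builds_enabled():
        # Toaster treats the literal string '1' as "on"; anything else disables
        # project creation and managed builds (assumption: set by the launcher).
        return os.environ.get("TOASTER_BUILDSERVER") == "1"

    print(managed_builds_enabled())
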
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/templates/baseprojectpage.html b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/templates/baseprojectpage.html
index 8427d25..f2bb2eb 100644
--- a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/templates/baseprojectpage.html
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/templates/baseprojectpage.html
@@ -32,6 +32,7 @@
<li><a href="{% url 'projectsoftwarerecipes' project.id %}">Software recipes</a></li>
<li><a href="{% url 'projectmachines' project.id %}">Machines</a></li>
<li><a href="{% url 'projectlayers' project.id %}">Layers</a></li>
+ <li><a href="{% url 'projectdistros' project.id %}">Distros</a></li>
<li class="nav-header">Extra configuration</li>
<li><a href="{% url 'projectconf' project.id %}">BitBake variables</a></li>
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/templates/distro_btn.html b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/templates/distro_btn.html
new file mode 100644
index 0000000..fac7947
--- /dev/null
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/templates/distro_btn.html
@@ -0,0 +1,20 @@
+<a href="{% url 'project' extra.pid %}?setDistro={{data.name}}" class="btn btn-default btn-block layer-exists-{{data.layer_version.id}}"
+ {% if data.layer_version.pk not in extra.current_layers %}
+ style="display:none;"
+ {% endif %}>
+ Set distro</a>
+<a class="btn btn-default btn-block layerbtn layer-add-{{data.layer_version.id}}" data-layer='{
+ "id": {{data.layer_version.id}},
+ "name": "{{data.layer_version.layer.name}}",
+ "xhrLayerUrl": "{% url "xhr_layer" extra.pid data.pk %}",
+ "layerdetailurl": "{%url 'layerdetails' extra.pid data.layer_version.id %}"
+ }' data-directive="add"
+ {% if data.layer_version.pk in extra.current_layers %}
+ style="display:none;"
+ {% endif %}
+>
+ <span class="glyphicon glyphicon-plus"></span>
+ Add layer
+ <span class="glyphicon glyphicon-question-sign get-help" title="To select this distro, you must first add the {{data.layer_version.layer.name}} layer to your project"></i>
+</a>
+
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/templates/health.html b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/templates/health.html
new file mode 100644
index 0000000..f17fdbc
--- /dev/null
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/templates/health.html
@@ -0,0 +1,6 @@
+<!DOCTYPE html>
+<html lang="en">
+ <head><title>Toaster Health</title></head>
+ <body>Ok</body>
+</html>
+
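
The new health page backs the /health URL added to toastermain/urls.py further down, giving load balancers and monitors a cheap liveness check. A quick probe, using only the standard library; the host and default port 8000 are assumptions about a local instance:

    from urllib.request import urlopen

    with urlopen("http://localhost:8000/health") as resp:
        # Expect HTTP 200 and the small "Ok" page rendered from health.html.
        print(resp.status, resp.read().decode("utf-8"))
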
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/templates/landing.html b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/templates/landing.html
index 4986632..70c7359 100644
--- a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/templates/landing.html
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/templates/landing.html
@@ -14,12 +14,20 @@
<p>A web interface to <a href="http://www.openembedded.org">OpenEmbedded</a> and <a href="http://www.yoctoproject.org/tools-resources/projects/bitbake">BitBake</a>, the <a href="http://www.yoctoproject.org">Yocto Project</a> build system.</p>
+ <p class="top-air">
+ <a class="btn btn-info btn-lg" href="http://www.yoctoproject.org/docs/latest/toaster-manual/toaster-manual.html#toaster-manual-setup-and-use">
+ Toaster is ready to capture your command line builds
+ </a>
+ </p>
+
{% if lvs_nos %}
+ {% if project_enable %}
<p class="top-air">
<a class="btn btn-primary btn-lg" href="{% url 'newproject' %}">
- To start building, create your first Toaster project
+ Create your first Toaster project to run managed builds
</a>
</p>
+ {% endif %}
{% else %}
<div class="alert alert-info lead top-air">
Toaster has no layer information. Without layer information, you cannot run builds. To generate layer information you can:
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/templates/mrb_section.html b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/templates/mrb_section.html
index b761ffe..c5b9fe9 100644
--- a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/templates/mrb_section.html
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/templates/mrb_section.html
@@ -64,7 +64,9 @@
</div>
<div data-build-state="<%:state%>">
- <%if state == 'Parsing'%>
+ <%if state == 'Cloning'%>
+ <%include tmpl='#cloning-repos-build-template'/%>
+ <%else state == 'Parsing'%>
<%include tmpl='#parsing-recipes-build-template'/%>
<%else state == 'Queued'%>
<%include tmpl='#queued-build-template'/%>
@@ -98,6 +100,31 @@
</div>
</script>
+<!-- cloning repos build -->
+<script id="cloning-repos-build-template" type="text/x-jsrender">
+ <!-- progress bar and clone completion percentage -->
+ <div data-role="build-status" class="col-md-4 col-md-offset-1 progress-info">
+ <!-- progress bar -->
+ <div class="progress">
+ <div id="repos-cloned-percentage-bar-<%:id%>"
+ style="width: <%:repos_cloned_percentage%>%;"
+ class="progress-bar">
+ </div>
+ </div>
+ </div>
+
+ <div class="col-md-4 progress-info">
+ <!-- clone completion percentage -->
+ <span class="glyphicon glyphicon-question-sign get-help get-help-blue"
+ title="Toaster is cloning the repos required for your build">
+ </span>
+
+ Cloning <span id="repos-cloned-percentage-<%:id%>"><%:repos_cloned_percentage%></span>% complete
+
+ <%include tmpl='#cancel-template'/%>
+ </div>
+</script>
+
<!-- parsing recipes build -->
<script id="parsing-recipes-build-template" type="text/x-jsrender">
<!-- progress bar and parse completion percentage -->
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/templates/project.html b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/templates/project.html
index ab7e665..11603d1 100644
--- a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/templates/project.html
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/templates/project.html
@@ -77,6 +77,22 @@
</form>
</div>
+ <div class="well well-transparent" id="distro-section">
+ <h3>Distro</h3>
+
+ <p class="lead"><span id="project-distro-name"></span> <span class="glyphicon glyphicon-edit" id="change-distro-toggle"></span></p>
+
+ <form id="select-distro-form" style="display:none;" class="form-inline">
+ <span class="help-block">Distro suggestions come from the Layer Index</a></span>
+ <div class="form-group">
+ <input class="form-control" id="distro-change-input" autocomplete="off" value="" data-provide="typeahead" data-minlength="1" data-autocomplete="off" type="text">
+ </div>
+ <button id="distro-change-btn" class="btn btn-default" type="button">Save</button>
+ <a href="#" id="cancel-distro-change" class="btn btn-link">Cancel</a>
+ <p class="form-link"><a href="{% url 'projectdistros' project.id %}">View compatible distros</a></p>
+ </form>
+ </div>
+
<div class="well well-transparent">
<h3>Most built recipes</h3>
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/typeaheads.py b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/typeaheads.py
index 58c650f..5aa0f8d 100644
--- a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/typeaheads.py
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/typeaheads.py
@@ -100,6 +100,36 @@
return results
+class DistrosTypeAhead(ToasterTypeAhead):
+ """ Typeahead for all the distros available in the current project's
+ configuration """
+ def __init__(self):
+ super(DistrosTypeAhead, self).__init__()
+
+ def apply_search(self, search_term, prj, request):
+ distros = prj.get_available_distros()
+ distros = distros.order_by("name")
+
+ primary_results = distros.filter(name__istartswith=search_term)
+ secondary_results = distros.filter(name__icontains=search_term).exclude(pk__in=primary_results)
+ tertiary_results = distros.filter(layer_version__layer__name__icontains=search_term).exclude(pk__in=primary_results).exclude(pk__in=secondary_results)
+
+ results = []
+
+ for distro in list(primary_results) + list(secondary_results) + list(tertiary_results):
+
+ detail = "[ %s ]" % (distro.layer_version.layer.name)
+ needed_fields = {
+ 'id' : distro.pk,
+ 'name' : distro.name,
+ 'detail' : detail,
+ }
+
+ results.append(needed_fields)
+
+ return results
+
+
class RecipesTypeAhead(ToasterTypeAhead):
""" Typeahead for all the recipes available in the current project's
configuration """
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/urls.py b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/urls.py
index d92f190..e07b0ef 100644
--- a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/urls.py
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/urls.py
@@ -1,7 +1,7 @@
#
# BitBake Toaster Implementation
#
-# Copyright (C) 2013 Intel Corporation
+# Copyright (C) 2013-2017 Intel Corporation
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 2 as
@@ -16,7 +16,7 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
-from django.conf.urls import patterns, include, url
+from django.conf.urls import include, url
from django.views.generic import RedirectView, TemplateView
from django.http import HttpResponseBadRequest
@@ -25,49 +25,50 @@
from toastergui import typeaheads
from toastergui import api
from toastergui import widgets
+from toastergui import views
-urlpatterns = patterns('toastergui.views',
+urlpatterns = [
# landing page
- url(r'^landing/$', 'landing', name='landing'),
+ url(r'^landing/$', views.landing, name='landing'),
url(r'^builds/$',
tables.AllBuildsTable.as_view(template_name="builds-toastertable.html"),
name='all-builds'),
# build info navigation
- url(r'^build/(?P<build_id>\d+)$', 'builddashboard', name="builddashboard"),
+ url(r'^build/(?P<build_id>\d+)$', views.builddashboard, name="builddashboard"),
url(r'^build/(?P<build_id>\d+)/tasks/$',
buildtables.BuildTasksTable.as_view(
template_name="buildinfo-toastertable.html"),
name='tasks'),
- url(r'^build/(?P<build_id>\d+)/task/(?P<task_id>\d+)$', 'task', name='task'),
+ url(r'^build/(?P<build_id>\d+)/task/(?P<task_id>\d+)$', views.task, name='task'),
url(r'^build/(?P<build_id>\d+)/recipes/$',
buildtables.BuiltRecipesTable.as_view(
template_name="buildinfo-toastertable.html"),
name='recipes'),
- url(r'^build/(?P<build_id>\d+)/recipe/(?P<recipe_id>\d+)/active_tab/(?P<active_tab>\d{1})$', 'recipe', name='recipe'),
+ url(r'^build/(?P<build_id>\d+)/recipe/(?P<recipe_id>\d+)/active_tab/(?P<active_tab>\d{1})$', views.recipe, name='recipe'),
- url(r'^build/(?P<build_id>\d+)/recipe/(?P<recipe_id>\d+)$', 'recipe', name='recipe'),
- url(r'^build/(?P<build_id>\d+)/recipe_packages/(?P<recipe_id>\d+)$', 'recipe_packages', name='recipe_packages'),
+ url(r'^build/(?P<build_id>\d+)/recipe/(?P<recipe_id>\d+)$', views.recipe, name='recipe'),
+ url(r'^build/(?P<build_id>\d+)/recipe_packages/(?P<recipe_id>\d+)$', views.recipe_packages, name='recipe_packages'),
url(r'^build/(?P<build_id>\d+)/packages/$',
buildtables.BuiltPackagesTable.as_view(
template_name="buildinfo-toastertable.html"),
name='packages'),
- url(r'^build/(?P<build_id>\d+)/package/(?P<package_id>\d+)$', 'package_built_detail',
+ url(r'^build/(?P<build_id>\d+)/package/(?P<package_id>\d+)$', views.package_built_detail,
name='package_built_detail'),
url(r'^build/(?P<build_id>\d+)/package_built_dependencies/(?P<package_id>\d+)$',
- 'package_built_dependencies', name='package_built_dependencies'),
+ views.package_built_dependencies, name='package_built_dependencies'),
url(r'^build/(?P<build_id>\d+)/package_included_detail/(?P<target_id>\d+)/(?P<package_id>\d+)$',
- 'package_included_detail', name='package_included_detail'),
+ views.package_included_detail, name='package_included_detail'),
url(r'^build/(?P<build_id>\d+)/package_included_dependencies/(?P<target_id>\d+)/(?P<package_id>\d+)$',
- 'package_included_dependencies', name='package_included_dependencies'),
+ views.package_included_dependencies, name='package_included_dependencies'),
url(r'^build/(?P<build_id>\d+)/package_included_reverse_dependencies/(?P<target_id>\d+)/(?P<package_id>\d+)$',
- 'package_included_reverse_dependencies', name='package_included_reverse_dependencies'),
+ views.package_included_reverse_dependencies, name='package_included_reverse_dependencies'),
url(r'^build/(?P<build_id>\d+)/target/(?P<target_id>\d+)$',
buildtables.InstalledPackagesTable.as_view(
@@ -75,11 +76,11 @@
name='target'),
- url(r'^dentries/build/(?P<build_id>\d+)/target/(?P<target_id>\d+)$', 'xhr_dirinfo', name='dirinfo_ajax'),
- url(r'^build/(?P<build_id>\d+)/target/(?P<target_id>\d+)/dirinfo$', 'dirinfo', name='dirinfo'),
- url(r'^build/(?P<build_id>\d+)/target/(?P<target_id>\d+)/dirinfo_filepath/_(?P<file_path>(?:/[^/\n]+)*)$', 'dirinfo', name='dirinfo_filepath'),
- url(r'^build/(?P<build_id>\d+)/configuration$', 'configuration', name='configuration'),
- url(r'^build/(?P<build_id>\d+)/configvars$', 'configvars', name='configvars'),
+ url(r'^dentries/build/(?P<build_id>\d+)/target/(?P<target_id>\d+)$', views.xhr_dirinfo, name='dirinfo_ajax'),
+ url(r'^build/(?P<build_id>\d+)/target/(?P<target_id>\d+)/dirinfo$', views.dirinfo, name='dirinfo'),
+ url(r'^build/(?P<build_id>\d+)/target/(?P<target_id>\d+)/dirinfo_filepath/_(?P<file_path>(?:/[^/\n]+)*)$', views.dirinfo, name='dirinfo_filepath'),
+ url(r'^build/(?P<build_id>\d+)/configuration$', views.configuration, name='configuration'),
+ url(r'^build/(?P<build_id>\d+)/configvars$', views.configvars, name='configvars'),
url(r'^build/(?P<build_id>\d+)/buildtime$',
buildtables.BuildTimeTable.as_view(
template_name="buildinfo-toastertable.html"),
@@ -97,26 +98,26 @@
# image information dir
url(r'^build/(?P<build_id>\d+)/target/(?P<target_id>\d+)/packagefile/(?P<packagefile_id>\d+)$',
- 'image_information_dir', name='image_information_dir'),
+ views.image_information_dir, name='image_information_dir'),
# build download artifact
- url(r'^build/(?P<build_id>\d+)/artifact/(?P<artifact_type>\w+)/id/(?P<artifact_id>\w+)', 'build_artifact', name="build_artifact"),
+ url(r'^build/(?P<build_id>\d+)/artifact/(?P<artifact_type>\w+)/id/(?P<artifact_id>\w+)', views.build_artifact, name="build_artifact"),
# project URLs
- url(r'^newproject/$', 'newproject', name='newproject'),
+ url(r'^newproject/$', views.newproject, name='newproject'),
url(r'^projects/$',
tables.ProjectsTable.as_view(template_name="projects-toastertable.html"),
name='all-projects'),
- url(r'^project/(?P<pid>\d+)/$', 'project', name='project'),
- url(r'^project/(?P<pid>\d+)/configuration$', 'projectconf', name='projectconf'),
+ url(r'^project/(?P<pid>\d+)/$', views.project, name='project'),
+ url(r'^project/(?P<pid>\d+)/configuration$', views.projectconf, name='projectconf'),
url(r'^project/(?P<pid>\d+)/builds/$',
tables.ProjectBuildsTable.as_view(template_name="projectbuilds-toastertable.html"),
name='projectbuilds'),
# the import layer is a project-specific functionality;
- url(r'^project/(?P<pid>\d+)/importlayer$', 'importlayer', name='importlayer'),
+ url(r'^project/(?P<pid>\d+)/importlayer$', views.importlayer, name='importlayer'),
# the table pages that have been converted to ToasterTable widget
url(r'^project/(?P<pid>\d+)/machines/$',
@@ -142,7 +143,7 @@
name="projectlayers"),
url(r'^project/(?P<pid>\d+)/layer/(?P<layerid>\d+)$',
- 'layerdetails', name='layerdetails'),
+ views.layerdetails, name='layerdetails'),
url(r'^project/(?P<pid>\d+)/layer/(?P<layerid>\d+)/recipes/$',
tables.LayerRecipesTable.as_view(template_name="generic-toastertable-page.html"),
@@ -157,6 +158,11 @@
name=tables.LayerMachinesTable.__name__.lower()),
+ url(r'^project/(?P<pid>\d+)/distros/$',
+ tables.DistrosTable.as_view(template_name="generic-toastertable-page.html"),
+ name="projectdistros"),
+
+
url(r'^project/(?P<pid>\d+)/customrecipe/(?P<custrecipeid>\d+)/selectpackages/$',
tables.SelectPackagesTable.as_view(), name="recipeselectpackages"),
@@ -166,7 +172,7 @@
name="customrecipe"),
url(r'^project/(?P<pid>\d+)/customrecipe/(?P<recipe_id>\d+)/download$',
- 'customrecipe_download',
+ views.customrecipe_download,
name="customrecipedownload"),
url(r'^project/(?P<pid>\d+)/recipe/(?P<recipe_id>\d+)$',
@@ -186,9 +192,12 @@
typeaheads.GitRevisionTypeAhead.as_view(),
name='xhr_gitrevtypeahead'),
- url(r'^xhr_testreleasechange/(?P<pid>\d+)$', 'xhr_testreleasechange',
+ url(r'^xhr_typeahead/(?P<pid>\d+)/distros$',
+ typeaheads.DistrosTypeAhead.as_view(), name='xhr_distrostypeahead'),
+
+ url(r'^xhr_testreleasechange/(?P<pid>\d+)$', views.xhr_testreleasechange,
name='xhr_testreleasechange'),
- url(r'^xhr_configvaredit/(?P<pid>\d+)$', 'xhr_configvaredit',
+ url(r'^xhr_configvaredit/(?P<pid>\d+)$', views.xhr_configvaredit,
name='xhr_configvaredit'),
url(r'^xhr_layer/(?P<pid>\d+)/(?P<layerversion_id>\d+)$',
@@ -200,7 +209,7 @@
name='xhr_layer'),
# JS Unit tests
- url(r'^js-unit-tests/$', 'jsunittests', name='js-unit-tests'),
+ url(r'^js-unit-tests/$', views.jsunittests, name='js-unit-tests'),
# image customisation functionality
url(r'^xhr_customrecipe/(?P<recipe_id>\d+)'
@@ -235,6 +244,11 @@
url(r'^mostrecentbuilds$', widgets.MostRecentBuildsView.as_view(),
name='most_recent_builds'),
- # default redirection
+ # JSON data for aggregators
+ url(r'^api/builds$', views.json_builds, name='json_builds'),
+ url(r'^api/building$', views.json_building, name='json_building'),
+ url(r'^api/build/(?P<build_id>\d+)$', views.json_build, name='json_build'),
+
+ # default redirection
url(r'^$', RedirectView.as_view(url='landing', permanent=True)),
-)
+]
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/views.py b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/views.py
index 75c5911..34ed2b2 100755
--- a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/views.py
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/views.py
@@ -35,7 +35,7 @@
from django.core.urlresolvers import reverse, resolve
from django.core.exceptions import MultipleObjectsReturned, ObjectDoesNotExist
from django.core.paginator import Paginator, EmptyPage, PageNotAnInteger
-from django.http import HttpResponseNotFound
+from django.http import HttpResponseNotFound, JsonResponse
from django.utils import timezone
from datetime import timedelta, datetime
from toastergui.templatetags.projecttags import json as jsonfilter
@@ -49,6 +49,8 @@
logger = logging.getLogger("toaster")
+# Enable project creation and managed builds when TOASTER_BUILDSERVER is '1'
+project_enable = ('1' == os.environ.get('TOASTER_BUILDSERVER'))
class MimeTypeFinder(object):
# setting this to False enables additional non-standard mimetypes
@@ -65,6 +67,12 @@
guessed_type = 'application/octet-stream'
return guessed_type
+# single point to add global values into the context before rendering
+def toaster_render(request, page, context):
+ context['project_enable'] = project_enable
+ return render(request, page, context)
+
+
# all new sessions should come through the landing page;
# determine in which mode we are running in, and redirect appropriately
def landing(request):
@@ -86,7 +94,7 @@
context = {'lvs_nos' : Layer_Version.objects.all().count()}
- return render(request, 'landing.html', context)
+ return toaster_render(request, 'landing.html', context)
def objtojson(obj):
from django.db.models.query import QuerySet
@@ -277,7 +285,7 @@
return None, invalid + str(field_input_list)
# Check we are looking for a valid field
- valid_fields = model._meta.get_all_field_names()
+ valid_fields = [f.name for f in model._meta.get_fields()]
for field in field_input_list[0].split(AND_VALUE_SEPARATOR):
if True in [field.startswith(x) for x in valid_fields]:
break
@@ -457,10 +465,15 @@
npkg = 0
pkgsz = 0
package = None
- for package in Package.objects.filter(id__in = [x.package_id for x in t.target_installed_package_set.all()]):
- pkgsz = pkgsz + package.size
- if package.installed_name:
- npkg = npkg + 1
+ # Chunk the query to avoid "too many SQL variables" error
+ package_set = t.target_installed_package_set.all()
+ package_set_len = len(package_set)
+ for ps_start in range(0,package_set_len,500):
+ ps_stop = min(ps_start+500,package_set_len)
+ for package in Package.objects.filter(id__in = [x.package_id for x in package_set[ps_start:ps_stop]]):
+ pkgsz = pkgsz + package.size
+ if package.installed_name:
+ npkg = npkg + 1
elem['npkg'] = npkg
elem['pkgsz'] = pkgsz
ti = Target_Image_File.objects.filter(target_id = t.id)
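
The chunked loop above keeps each id__in clause below SQLite's default limit of 999 bound variables. The same idea in isolation, with the chunk size of 500 taken from the patch; the helper names and the model argument are chosen here purely for illustration:

    CHUNK = 500  # value used in the patch; stays well under SQLite's 999-variable limit

    def chunked(seq, size=CHUNK):
        for start in range(0, len(seq), size):
            yield seq[start:start + size]

    def installed_package_stats(package_ids, package_model):
        # Sum sizes and count installed packages without one giant IN (...) clause.
        total_size = installed = 0
        for ids in chunked(list(package_ids)):
            for pkg in package_model.objects.filter(id__in=ids):
                total_size += pkg.size
                if pkg.installed_name:
                    installed += 1
        return total_size, installed
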
@@ -514,7 +527,7 @@
'packagecount' : packageCount,
'logmessages' : logmessages,
}
- return render( request, template, context )
+ return toaster_render( request, template, context )
@@ -586,7 +599,7 @@
build__completed_on__lt=task_object.build.completed_on).exclude(
order__isnull=True).exclude(outcome=Task.OUTCOME_NA).order_by('-build__completed_on')
- return render( request, template, context )
+ return toaster_render( request, template, context )
def recipe(request, build_id, recipe_id, active_tab="1"):
template = "recipe.html"
@@ -613,7 +626,7 @@
'package_count' : package_count,
'tab_states' : tab_states,
}
- return render(request, template, context)
+ return toaster_render(request, template, context)
def recipe_packages(request, build_id, recipe_id):
template = "recipe_packages.html"
@@ -658,7 +671,7 @@
},
]
}
- response = render(request, template, context)
+ response = toaster_render(request, template, context)
_set_parameters_values(pagesize, orderby, request)
return response
@@ -780,7 +793,7 @@
'dir_list': dir_list,
'file_path': file_path,
}
- return render(request, template, context)
+ return toaster_render(request, template, context)
def _find_task_dep(task_object):
tdeps = Task_Dependency.objects.filter(task=task_object).filter(depends_on__order__gt=0)
@@ -832,7 +845,7 @@
'build': build,
'project': build.project,
'targets': Target.objects.filter(build=build_id)})
- return render(request, template, context)
+ return toaster_render(request, template, context)
def configvars(request, build_id):
@@ -921,7 +934,7 @@
],
}
- response = render(request, template, context)
+ response = toaster_render(request, template, context)
_set_parameters_values(pagesize, orderby, request)
return response
@@ -934,7 +947,7 @@
'project': build.project,
'objects' : files
}
- return render(request, template, context)
+ return toaster_render(request, template, context)
# A set of dependency types valid for both included and built package views
@@ -1087,7 +1100,7 @@
if paths.all().count() < 2:
context['disable_sort'] = True;
- response = render(request, template, context)
+ response = toaster_render(request, template, context)
_set_parameters_values(pagesize, orderby, request)
return response
@@ -1106,7 +1119,7 @@
'other_deps' : dependencies['other_deps'],
'dependency_count' : _get_package_dependency_count(package, -1, False)
}
- return render(request, template, context)
+ return toaster_render(request, template, context)
def package_included_detail(request, build_id, target_id, package_id):
@@ -1152,7 +1165,7 @@
}
if paths.all().count() < 2:
context['disable_sort'] = True
- response = render(request, template, context)
+ response = toaster_render(request, template, context)
_set_parameters_values(pagesize, orderby, request)
return response
@@ -1176,7 +1189,7 @@
'reverse_count' : _get_package_reverse_dep_count(package, target_id),
'dependency_count' : _get_package_dependency_count(package, target_id, True)
}
- return render(request, template, context)
+ return toaster_render(request, template, context)
def package_included_reverse_dependencies(request, build_id, target_id, package_id):
template = "package_included_reverse_dependencies.html"
@@ -1227,7 +1240,7 @@
}
if objects.all().count() < 2:
context['disable_sort'] = True
- response = render(request, template, context)
+ response = toaster_render(request, template, context)
_set_parameters_values(pagesize, orderby, request)
return response
@@ -1251,6 +1264,89 @@
}
return ret
+# REST-based API calls to return build/building status to external Toaster
+# managers and aggregators via JSON
+
+def _json_build_status(build_id,extend):
+ build_stat = None
+ try:
+ build = Build.objects.get( pk = build_id )
+ build_stat = {}
+ build_stat['id'] = build.id
+ build_stat['name'] = build.build_name
+ build_stat['machine'] = build.machine
+ build_stat['distro'] = build.distro
+ build_stat['start'] = build.started_on
+ # look up target name
+ target = Target.objects.filter( build = build ).first()
+ if target:
+ if target.task:
+ build_stat['target'] = '%s:%s' % (target.target,target.task)
+ else:
+ build_stat['target'] = '%s' % (target.target)
+ else:
+ build_stat['target'] = ''
+ # look up project name
+ project = Project.objects.get( build = build )
+ if project:
+ build_stat['project'] = project.name
+ else:
+ build_stat['project'] = ''
+ if Build.IN_PROGRESS == build.outcome:
+ now = timezone.now()
+ timediff = now - build.started_on
+ build_stat['seconds']='%.3f' % timediff.total_seconds()
+ build_stat['clone']='%d:%d' % (build.repos_cloned,build.repos_to_clone)
+ build_stat['parse']='%d:%d' % (build.recipes_parsed,build.recipes_to_parse)
+ tf = Task.objects.filter(build = build)
+ tfc = tf.count()
+ if tfc > 0:
+ tfd = tf.exclude(order__isnull=True).count()
+ else:
+ tfd = 0
+ build_stat['task']='%d:%d' % (tfd,tfc)
+ else:
+ build_stat['outcome'] = build.get_outcome_text()
+ timediff = build.completed_on - build.started_on
+ build_stat['seconds']='%.3f' % timediff.total_seconds()
+ build_stat['stop'] = build.completed_on
+ messages = LogMessage.objects.all().filter(build = build)
+ errors = len(messages.filter(level=LogMessage.ERROR) |
+ messages.filter(level=LogMessage.EXCEPTION) |
+ messages.filter(level=LogMessage.CRITICAL))
+ build_stat['errors'] = errors
+ warnings = len(messages.filter(level=LogMessage.WARNING))
+ build_stat['warnings'] = warnings
+ if extend:
+ build_stat['cooker_log'] = build.cooker_log_path
+ except Exception as e:
+ build_stat = str(e)
+ return build_stat
+
+def json_builds(request):
+ build_table = []
+ builds = []
+ try:
+ builds = Build.objects.exclude(outcome=Build.IN_PROGRESS).order_by("-started_on")
+ for build in builds:
+ build_table.append(_json_build_status(build.id,False))
+ except Exception as e:
+ build_table = str(e)
+ return JsonResponse({'builds' : build_table, 'count' : len(builds)})
+
+def json_building(request):
+ build_table = []
+ builds = []
+ try:
+ builds = Build.objects.filter(outcome=Build.IN_PROGRESS).order_by("-started_on")
+ for build in builds:
+ build_table.append(_json_build_status(build.id,False))
+ except Exception as e:
+ build_table = str(e)
+ return JsonResponse({'building' : build_table, 'count' : len(builds)})
+
+def json_build(request,build_id):
+ return JsonResponse({'build' : _json_build_status(build_id,True)})
import toastermain.settings
@@ -1277,6 +1373,9 @@
# new project
def newproject(request):
+ if not project_enable:
+ return redirect( landing )
+
template = "newproject.html"
context = {
'email': request.user.email if request.user.is_authenticated() else '',
@@ -1291,7 +1390,7 @@
if request.method == "GET":
# render new project page
- return render(request, template, context)
+ return toaster_render(request, template, context)
elif request.method == "POST":
mandatory_fields = ['projectname', 'ptype']
try:
@@ -1331,7 +1430,7 @@
context['alert'] = "Your chosen username is already used"
else:
context['alert'] = str(e)
- return render(request, template, context)
+ return toaster_render(request, template, context)
raise Exception("Invalid HTTP method for this page")
@@ -1339,7 +1438,7 @@
def project(request, pid):
project = Project.objects.get(pk=pid)
context = {"project": project}
- return render(request, "project.html", context)
+ return toaster_render(request, "project.html", context)
def jsunittests(request):
""" Provides a page for the js unit tests """
@@ -1365,7 +1464,7 @@
name="MACHINE",
value="qemux86")
context = {'project': new_project}
- return render(request, "js-unit-tests.html", context)
+ return toaster_render(request, "js-unit-tests.html", context)
from django.views.decorators.csrf import csrf_exempt
@csrf_exempt
@@ -1500,7 +1599,7 @@
context = {
'project': Project.objects.get(id=pid),
}
- return render(request, template, context)
+ return toaster_render(request, template, context)
def layerdetails(request, pid, layerid):
project = Project.objects.get(pk=pid)
@@ -1529,7 +1628,7 @@
'projectlayers': list(project_layers)
}
- return render(request, 'layerdetails.html', context)
+ return toaster_render(request, 'layerdetails.html', context)
def get_project_configvars_context():
@@ -1619,7 +1718,7 @@
except (ProjectVariable.DoesNotExist, BuildEnvironment.DoesNotExist):
pass
- return render(request, "projectconf.html", context)
+ return toaster_render(request, "projectconf.html", context)
def _file_names_for_artifact(build, artifact_type, artifact_id):
"""
@@ -1686,6 +1785,7 @@
return response
else:
- return render(request, "unavailable_artifact.html")
+ return toaster_render(request, "unavailable_artifact.html")
except (ObjectDoesNotExist, IOError):
- return render(request, "unavailable_artifact.html")
+ return toaster_render(request, "unavailable_artifact.html")
+
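
The new /api/builds, /api/building and /api/build/<id> endpoints return the JSON documents produced by _json_build_status() above. A sketch of an external aggregator polling them; the base URL assumes a local Toaster instance with the toastergui app mounted at /toastergui/:

    import json
    from urllib.request import urlopen

    BASE = "http://localhost:8000/toastergui/api"

    def fetch(path):
        with urlopen("%s/%s" % (BASE, path)) as resp:
            return json.loads(resp.read().decode("utf-8"))

    finished = fetch("builds")       # {'builds': [...], 'count': N}
    in_progress = fetch("building")  # {'building': [...], 'count': N}
    print("%d finished, %d building" % (finished["count"], in_progress["count"]))
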
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/widgets.py b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/widgets.py
index 6b7b981..a1792d9 100644
--- a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/widgets.py
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/widgets.py
@@ -41,6 +41,7 @@
import json
import collections
import re
+import os
from toastergui.tablefilter import TableFilterMap
@@ -86,6 +87,9 @@
context['table_name'] = type(self).__name__.lower()
context['empty_state'] = self.empty_state
+ # global variables
+ context['project_enable'] = ('1' == os.environ.get('TOASTER_BUILDSERVER'))
+
return context
def get(self, request, *args, **kwargs):
@@ -511,6 +515,10 @@
int((build_obj.recipes_parsed /
build_obj.recipes_to_parse) * 100)
+ build['repos_cloned_percentage'] = \
+ int((build_obj.repos_cloned /
+ build_obj.repos_to_clone) * 100)
+
tasks_complete_percentage = 0
if build_obj.outcome in (Build.SUCCEEDED, Build.FAILED):
tasks_complete_percentage = 100
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/toastermain/management/commands/buildslist.py b/import-layers/yocto-poky/bitbake/lib/toaster/toastermain/management/commands/buildslist.py
index 8dfef0a..70b5812 100644
--- a/import-layers/yocto-poky/bitbake/lib/toaster/toastermain/management/commands/buildslist.py
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/toastermain/management/commands/buildslist.py
@@ -1,13 +1,13 @@
-from django.core.management.base import NoArgsCommand, CommandError
+from django.core.management.base import BaseCommand, CommandError
from orm.models import Build
import os
-class Command(NoArgsCommand):
+class Command(BaseCommand):
args = ""
help = "Lists current builds"
- def handle_noargs(self,**options):
+ def handle(self,**options):
for b in Build.objects.all():
print("%d: %s %s %s" % (b.pk, b.machine, b.distro, ",".join([x.target for x in b.target_set.all()])))
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/toastermain/settings.py b/import-layers/yocto-poky/bitbake/lib/toaster/toastermain/settings.py
index 1fd649c..13541d3 100644
--- a/import-layers/yocto-poky/bitbake/lib/toaster/toastermain/settings.py
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/toastermain/settings.py
@@ -24,7 +24,6 @@
import os
DEBUG = True
-TEMPLATE_DEBUG = DEBUG
# Set to True to see the SQL queries in console
SQL_DEBUG = False
@@ -161,12 +160,47 @@
# Make this unique, and don't share it with anybody.
SECRET_KEY = 'NOT_SUITABLE_FOR_HOSTED_DEPLOYMENT'
-# List of callables that know how to import templates from various sources.
-TEMPLATE_LOADERS = (
- 'django.template.loaders.filesystem.Loader',
- 'django.template.loaders.app_directories.Loader',
-# 'django.template.loaders.eggs.Loader',
-)
+class InvalidString(str):
+ def __mod__(self, other):
+ from django.template.base import TemplateSyntaxError
+ raise TemplateSyntaxError(
+ "Undefined variable or unknown value for: \"%s\"" % other)
+
+TEMPLATES = [
+ {
+ 'BACKEND': 'django.template.backends.django.DjangoTemplates',
+ 'DIRS': [
+ # Put strings here, like "/home/html/django_templates" or "C:/www/django/templates".
+ # Always use forward slashes, even on Windows.
+ # Don't forget to use absolute paths, not relative paths.
+ ],
+ 'OPTIONS': {
+ 'context_processors': [
+ # Insert your TEMPLATE_CONTEXT_PROCESSORS here or use this
+ # list if you haven't customized them:
+ 'django.contrib.auth.context_processors.auth',
+ 'django.template.context_processors.debug',
+ 'django.template.context_processors.i18n',
+ 'django.template.context_processors.media',
+ 'django.template.context_processors.static',
+ 'django.template.context_processors.tz',
+ 'django.contrib.messages.context_processors.messages',
+ # Custom
+ 'django.template.context_processors.request',
+ 'toastergui.views.managedcontextprocessor',
+
+ ],
+ 'loaders': [
+ # List of callables that know how to import templates from various sources.
+ 'django.template.loaders.filesystem.Loader',
+ 'django.template.loaders.app_directories.Loader',
+ #'django.template.loaders.eggs.Loader',
+ ],
+ 'string_if_invalid': InvalidString("%s"),
+ 'debug': DEBUG,
+ },
+ },
+]
MIDDLEWARE_CLASSES = (
'django.middleware.common.CommonMiddleware',
@@ -203,22 +237,6 @@
# Python dotted path to the WSGI application used by Django's runserver.
WSGI_APPLICATION = 'toastermain.wsgi.application'
-TEMPLATE_DIRS = (
- # Put strings here, like "/home/html/django_templates" or "C:/www/django/templates".
- # Always use forward slashes, even on Windows.
- # Don't forget to use absolute paths, not relative paths.
-)
-
-TEMPLATE_CONTEXT_PROCESSORS = ('django.contrib.auth.context_processors.auth',
- 'django.core.context_processors.debug',
- 'django.core.context_processors.i18n',
- 'django.core.context_processors.media',
- 'django.core.context_processors.static',
- 'django.core.context_processors.tz',
- 'django.contrib.messages.context_processors.messages',
- "django.core.context_processors.request",
- 'toastergui.views.managedcontextprocessor',
- )
INSTALLED_APPS = (
'django.contrib.auth',
@@ -348,10 +366,4 @@
#
-class InvalidString(str):
- def __mod__(self, other):
- from django.template.base import TemplateSyntaxError
- raise TemplateSyntaxError(
- "Undefined variable or unknown value for: \"%s\"" % other)
-TEMPLATE_STRING_IF_INVALID = InvalidString("%s")
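
The relocated InvalidString class is now wired in through the TEMPLATES OPTIONS 'string_if_invalid' entry, so referencing an undefined variable in a template raises instead of silently rendering an empty string. A standalone sketch of that behaviour, configured outside Toaster purely for illustration:

    import django
    from django.conf import settings
    from django.template import Template, Context
    from django.template.base import TemplateSyntaxError

    class InvalidString(str):
        def __mod__(self, other):
            raise TemplateSyntaxError(
                'Undefined variable or unknown value for: "%s"' % other)

    settings.configure(TEMPLATES=[{
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [], 'APP_DIRS': False,
        'OPTIONS': {'string_if_invalid': InvalidString("%s")},
    }])
    django.setup()

    try:
        Template("{{ not_defined }}").render(Context({}))
    except TemplateSyntaxError as e:
        print("caught:", e)  # the undefined variable is reported loudly
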
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/toastermain/urls.py b/import-layers/yocto-poky/bitbake/lib/toaster/toastermain/urls.py
index 1f8599e..e2fb0ae 100644
--- a/import-layers/yocto-poky/bitbake/lib/toaster/toastermain/urls.py
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/toastermain/urls.py
@@ -19,9 +19,10 @@
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
-from django.conf.urls import patterns, include, url
-from django.views.generic import RedirectView
+from django.conf.urls import include, url
+from django.views.generic import RedirectView, TemplateView
from django.views.decorators.cache import never_cache
+import bldcollector.views
import logging
@@ -31,7 +32,7 @@
from django.contrib import admin
admin.autodiscover()
-urlpatterns = patterns('',
+urlpatterns = [
# Examples:
# url(r'^toaster/', include('toaster.foo.urls')),
@@ -42,11 +43,13 @@
# This is here to maintain backward compatibility and will be deprecated
# in the future.
- url(r'^orm/eventfile$', 'bldcollector.views.eventfile'),
+ url(r'^orm/eventfile$', bldcollector.views.eventfile),
+
+ url(r'^health$', TemplateView.as_view(template_name="health.html"), name='Toaster Health'),
# if no application is selected, we have the magic toastergui app here
url(r'^$', never_cache(RedirectView.as_view(url='/toastergui/', permanent=True))),
-)
+]
import toastermain.settings
diff --git a/import-layers/yocto-poky/bitbake/toaster-requirements.txt b/import-layers/yocto-poky/bitbake/toaster-requirements.txt
index e61c8e2..c0ec368 100644
--- a/import-layers/yocto-poky/bitbake/toaster-requirements.txt
+++ b/import-layers/yocto-poky/bitbake/toaster-requirements.txt
@@ -1,3 +1,3 @@
-Django>1.8,<1.9
+Django>1.8,<1.11.9
beautifulsoup4>=4.4.0
pytz
diff --git a/import-layers/yocto-poky/documentation/Makefile b/import-layers/yocto-poky/documentation/Makefile
index 9077c81..9891095 100644
--- a/import-layers/yocto-poky/documentation/Makefile
+++ b/import-layers/yocto-poky/documentation/Makefile
@@ -8,7 +8,7 @@
# system. These manuals also include an "eclipse" sub-directory as part of
# the make process.
#
-# Note that the figures for the Yocto Project Development Manual
+# Note that the figures for the Yocto Project Development Tasks Manual
# differ depending on the BRANCH being built.
#
# The Makefile has these targets:
@@ -42,7 +42,7 @@
# To build a manual, you must invoke Makefile with the DOC argument. If you
# are going to publish the manual, then you must invoke Makefile with both the
# DOC and the VER argument. Furthermore, if you are building or publishing
-# the edison or denzil versions of the Yocto Project Development Manual or
+# the edison or denzil versions of the Yocto Project Development Tasks Manual or
# the mega-manual, you must also use the BRANCH argument.
#
# Examples:
@@ -59,7 +59,7 @@
# 'make DOC=yocto-project-qs' command would be equivalent. The third example
# generates just the PDF version of the Yocto Project Reference Manual.
# The fourth example generates the HTML 'edison' version and (if available)
-# the Eclipse help version of the YP Development Manual. The last example
+# the Eclipse help version of the YP Development Tasks Manual. The last example
# generates the HTML version of the mega-manual and uses the 'denzil'
# branch when choosing figures for the tarball of figures. Any example that does
# not use the BRANCH argument builds the current version of the manual set.
@@ -79,15 +79,16 @@
# The first example publishes the 1.7 version of both the PDF and HTML versions of
# the BSP Guide. The second example publishes the 1.6 version of both the PDF and
# HTML versions of the ADT Manual. The third example publishes the 1.1.1 version of
-# the PDF and HTML YP Development Manual for the 'edison' branch. The fourth example
-# publishes the 1.2 version of the PDF and HTML YP Development Manual for the
-# 'denzil' branch.
+# the PDF and HTML YP Development Tasks Manual for the 'edison' branch. The fourth
+# example publishes the 1.2 version of the PDF and HTML YP Development Tasks Manual
+# for the 'denzil' branch.
#
ifeq ($(DOC),bsp-guide)
XSLTOPTS = --xinclude
ALLPREQ = html eclipse tarball
TARFILES = bsp-style.css bsp-guide.html figures/bsp-title.png \
+ figures/bsp-dev-flow.png \
eclipse
MANUALS = $(DOC)/$(DOC).html $(DOC)/eclipse
FIGURES = figures
@@ -128,14 +129,8 @@
figures/wip.png
else
TARFILES = dev-style.css dev-manual.html \
- figures/bsp-dev-flow.png \
- figures/dev-title.png figures/git-workflow.png \
- figures/index-downloads.png figures/kernel-dev-flow.png \
- figures/kernel-overview-1.png figures/kernel-overview-2-generic.png \
- figures/source-repos.png figures/yp-download.png \
- figures/recipe-workflow.png \
- figures/devtool-add-flow.png figures/devtool-modify-flow.png \
- figures/devtool-upgrade-flow.png \
+ figures/dev-title.png \
+ figures/recipe-workflow.png figures/bitbake-build-flow.png \
eclipse
endif
@@ -148,7 +143,7 @@
ifeq ($(DOC),yocto-project-qs)
XSLTOPTS = --xinclude
ALLPREQ = html eclipse tarball
-TARFILES = yocto-project-qs.html qs-style.css figures/yocto-environment.png \
+TARFILES = yocto-project-qs.html qs-style.css \
figures/yocto-project-transp.png \
eclipse
MANUALS = $(DOC)/$(DOC).html $(DOC)/eclipse
@@ -195,8 +190,8 @@
figures/source-repos.png figures/yp-download.png \
figures/wip.png
else
-TARFILES = mega-manual.html mega-style.css figures/yocto-environment.png \
- figures/building-an-image.png \
+TARFILES = mega-manual.html mega-style.css \
+ figures/building-an-image.png figures/YP-flow-diagram.png \
figures/using-a-pre-built-image.png \
figures/poky-title.png figures/buildhistory.png \
figures/buildhistory-web.png \
@@ -206,7 +201,7 @@
figures/dev-title.png \
figures/git-workflow.png figures/index-downloads.png \
figures/kernel-dev-flow.png \
- figures/kernel-overview-1.png figures/kernel-overview-2-generic.png \
+ figures/kernel-overview-2-generic.png \
figures/source-repos.png figures/yp-download.png \
figures/profile-title.png figures/kernelshark-all.png \
figures/kernelshark-choose-events.png \
@@ -244,13 +239,12 @@
figures/sdk-generation.png figures/recipe-workflow.png \
figures/build-workspace-directory.png figures/mega-title.png \
figures/toaster-title.png figures/hosted-service.png \
- figures/simple-configuration.png figures/devtool-add-flow.png \
- figures/devtool-modify-flow.png figures/devtool-upgrade-flow.png \
+ figures/simple-configuration.png \
figures/compatible-layers.png figures/import-layer.png figures/new-project.png \
figures/sdk-environment.png figures/sdk-installed-standard-sdk-directory.png \
figures/sdk-devtool-add-flow.png figures/sdk-installed-extensible-sdk-directory.png \
figures/sdk-devtool-modify-flow.png figures/sdk-eclipse-dev-flow.png \
- figures/sdk-devtool-upgrade-flow.png
+ figures/sdk-devtool-upgrade-flow.png figures/bitbake-build-flow.png
endif
MANUALS = $(DOC)/$(DOC).html
@@ -262,7 +256,7 @@
ifeq ($(DOC),ref-manual)
XSLTOPTS = --xinclude
ALLPREQ = html eclipse tarball
-TARFILES = ref-manual.html ref-style.css figures/poky-title.png \
+TARFILES = ref-manual.html ref-style.css figures/poky-title.png figures/YP-flow-diagram.png \
figures/buildhistory.png figures/buildhistory-web.png eclipse \
figures/cross-development-toolchains.png figures/layer-input.png \
figures/package-feeds.png figures/source-input.png \
@@ -271,7 +265,8 @@
figures/patching.png figures/configuration-compile-autoreconf.png \
figures/analysis-for-package-splitting.png figures/image-generation.png \
figures/sdk-generation.png figures/building-an-image.png \
- figures/build-workspace-directory.png
+ figures/build-workspace-directory.png figures/source-repos.png \
+ figures/index-downloads.png figures/yp-download.png figures/git-workflow.png
MANUALS = $(DOC)/$(DOC).html $(DOC)/eclipse
FIGURES = figures
STYLESHEET = $(DOC)/*.css
@@ -330,8 +325,8 @@
XSLTOPTS = --xinclude
ALLPREQ = html eclipse tarball
TARFILES = kernel-dev.html kernel-dev-style.css \
- figures/kernel-dev-title.png \
- figures/kernel-architecture-overview.png \
+ figures/kernel-dev-title.png figures/kernel-overview-2-generic.png \
+ figures/kernel-architecture-overview.png figures/kernel-dev-flow.png \
eclipse
MANUALS = $(DOC)/$(DOC).html $(DOC)/eclipse
FIGURES = figures
diff --git a/import-layers/yocto-poky/documentation/README b/import-layers/yocto-poky/documentation/README
index a4e70a8..d64f2fd 100644
--- a/import-layers/yocto-poky/documentation/README
+++ b/import-layers/yocto-poky/documentation/README
@@ -36,8 +36,8 @@
* sdk-manual - The Yocto Project Software Development Kit (SDK) Developer's Guide.
* bsp-guide - The Yocto Project Board Support Package (BSP) Developer's Guide
-* dev-manual - The Yocto Project Development Manual
-* kernel-dev - The Yocto Project Linux Kernel Development Manual
+* dev-manual - The Yocto Project Development Tasks Manual
+* kernel-dev - The Yocto Project Linux Kernel Development Tasks Manual
* ref-manual - The Yocto Project Reference Manual
* yocto-project-qs - The Yocto Project Quick Start
* mega-manual - The Yocto Project Mega-Manual, which is an aggregated manual comprised
diff --git a/import-layers/yocto-poky/documentation/bsp-guide/bsp-guide.xml b/import-layers/yocto-poky/documentation/bsp-guide/bsp-guide.xml
index f0ee399..576ed18 100644
--- a/import-layers/yocto-poky/documentation/bsp-guide/bsp-guide.xml
+++ b/import-layers/yocto-poky/documentation/bsp-guide/bsp-guide.xml
@@ -22,18 +22,11 @@
<authorgroup>
<author>
- <firstname>Saul</firstname> <surname>Wold</surname>
+ <firstname>Scott</firstname> <surname>Rifenbark</surname>
<affiliation>
- <orgname>Intel Corporation</orgname>
+ <orgname>Scotty's Documentation Services, INC</orgname>
</affiliation>
- <email>saul.wold@intel.com</email>
- </author>
- <author>
- <firstname>Richard</firstname> <surname>Purdie</surname>
- <affiliation>
- <orgname>Linux Foundation</orgname>
- </affiliation>
- <email>richard.purdie@linuxfoundation.org</email>
+ <email>srifenbark@gmail.com</email>
</author>
</authorgroup>
@@ -119,24 +112,19 @@
<revremark>Released with the Yocto Project 2.3 Release.</revremark>
</revision>
<revision>
- <revnumber>2.3.1</revnumber>
- <date>June 2017</date>
- <revremark>Released with the Yocto Project 2.3.1 Release.</revremark>
+ <revnumber>2.4</revnumber>
+ <date>October 2017</date>
+ <revremark>Released with the Yocto Project 2.4 Release.</revremark>
</revision>
<revision>
- <revnumber>2.3.2</revnumber>
- <date>September 2017</date>
- <revremark>Released with the Yocto Project 2.3.2 Release.</revremark>
- </revision>
- <revision>
- <revnumber>2.3.3</revnumber>
+ <revnumber>2.4.1</revnumber>
<date>January 2018</date>
- <revremark>Released with the Yocto Project 2.3.3 Release.</revremark>
+ <revremark>Released with the Yocto Project 2.4.1 Release.</revremark>
</revision>
<revision>
- <revnumber>2.3.4</revnumber>
- <date>April 2018</date>
- <revremark>Released with the Yocto Project 2.3.4 Release.</revremark>
+ <revnumber>2.4.2</revnumber>
+ <date>March 2018</date>
+ <revremark>Released with the Yocto Project 2.4.2 Release.</revremark>
</revision>
</revhistory>
@@ -150,34 +138,34 @@
Permission is granted to copy, distribute and/or modify this document under
the terms of the <ulink type="http" url="http://creativecommons.org/licenses/by-nc-sa/2.0/uk/">Creative Commons Attribution-Share Alike 2.0 UK: England & Wales</ulink> as published by Creative Commons.
</para>
- <note><title>Manual Notes</title>
- <itemizedlist>
- <listitem><para>
- For the latest version of the Yocto Project Board
- Support Package (BSP) Developer's Guide associated with
- this Yocto Project release (version
- &YOCTO_DOC_VERSION;),
- see the Yocto Project Board Support Package (BSP)
- Developer's Guide from the
- <ulink url='&YOCTO_HOME_URL;/documentation'>Yocto Project documentation page</ulink>.
+ <note><title>Manual Notes</title>
+ <itemizedlist>
+ <listitem><para>
+ This version of the
+ <emphasis>Yocto Project Board Support Package Developer's Guide</emphasis>
+ is for the &YOCTO_DOC_VERSION; release of the
+ Yocto Project.
+ To be sure you have the latest version of the manual
+ for this release, use the manual from the
+ <ulink url='&YOCTO_HOME_URL;/documentation'>Yocto Project documentation page</ulink>.
+ </para></listitem>
+ <listitem><para>
+ For manuals associated with other releases of the Yocto
+ Project, go to the
+ <ulink url='&YOCTO_HOME_URL;/documentation'>Yocto Project documentation page</ulink>
+ and use the drop-down "Active Releases" button
+ and choose the manual associated with the desired
+ Yocto Project.
+ </para></listitem>
+ <listitem><para>
+ To report any inaccuracies or problems with this
+ manual, send an email to the Yocto Project
+ discussion group at
+ <filename>yocto@yoctoproject.com</filename> or log into
+ the freenode <filename>#yocto</filename> channel.
</para></listitem>
- <listitem><para>
- This version of the manual is version
- &YOCTO_DOC_VERSION;.
- For later releases of the Yocto Project (if they exist),
- go to the
- <ulink url='&YOCTO_HOME_URL;/documentation'>Yocto Project documentation page</ulink>
- and use the drop-down "Active Releases" button
- and choose the Yocto Project version for which you want
- the manual.
- </para></listitem>
- <listitem><para>
- For an in-development version of the Yocto Project
- Board Support Package (BSP) Developer's Guide, see
- <ulink url='&YOCTO_DOCS_URL;/latest/bsp-guide/bsp-guide.html'></ulink>.
- </para></listitem>
- </itemizedlist>
- </note>
+ </itemizedlist>
+ </note>
</legalnotice>
</bookinfo>
diff --git a/import-layers/yocto-poky/documentation/bsp-guide/bsp.xml b/import-layers/yocto-poky/documentation/bsp-guide/bsp.xml
index a92e611..d7b6f15 100644
--- a/import-layers/yocto-poky/documentation/bsp-guide/bsp.xml
+++ b/import-layers/yocto-poky/documentation/bsp-guide/bsp.xml
@@ -55,7 +55,7 @@
To help understand the BSP layer concept, consider the BSPs that the
Yocto Project supports and provides with each release.
You can see the layers in the
- <ulink url='&YOCTO_DOCS_DEV_URL;#yocto-project-repositories'>Yocto Project Source Repositories</ulink>
+ <ulink url='&YOCTO_DOCS_REF_URL;#yocto-project-repositories'>Yocto Project Source Repositories</ulink>
through a web interface at
<ulink url='&YOCTO_GIT_URL;/cgit/cgit.cgi'></ulink>.
If you go to that interface, you will find near the bottom of the list
@@ -83,12 +83,12 @@
<para>
For information on the BSP development workflow, see the
- "<ulink url='&YOCTO_DOCS_DEV_URL;#developing-a-board-support-package-bsp'>Developing a Board Support Package (BSP)</ulink>"
- section in the Yocto Project Development Manual.
+ "<link linkend='developing-a-board-support-package-bsp'>Developing a Board Support Package (BSP)</link>"
+ section.
For more information on how to set up a local copy of source files
from a Git repository, see the
- "<ulink url='&YOCTO_DOCS_DEV_URL;#getting-setup'>Getting Set Up</ulink>"
- section also in the Yocto Project Development Manual.
+ "<ulink url='&YOCTO_DOCS_DEV_URL;#working-with-yocto-project-source-files'>Working With Yocto Project Source Files</ulink>"
+ section also in the Yocto Project Development Tasks Manual.
</para>
<para>
@@ -98,12 +98,10 @@
This root is what you add to the
<ulink url='&YOCTO_DOCS_REF_URL;#var-BBLAYERS'><filename>BBLAYERS</filename></ulink>
variable in the <filename>conf/bblayers.conf</filename> file found in the
- <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>,
- which is established after you run one of the OpenEmbedded build environment
- setup scripts (i.e.
- <ulink url='&YOCTO_DOCS_REF_URL;#structure-core-script'><filename>&OE_INIT_FILE;</filename></ulink>
- and
- <ulink url='&YOCTO_DOCS_REF_URL;#structure-memres-core-script'><filename>oe-init-build-env-memres</filename></ulink>).
+ <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>,
+ which is established after you run the OpenEmbedded build environment
+ setup script (i.e.
+ <ulink url='&YOCTO_DOCS_REF_URL;#structure-core-script'><filename>&OE_INIT_FILE;</filename></ulink>).
Adding the root allows the OpenEmbedded build system to recognize the BSP
definition and from it build an image.
Here is an example:
@@ -141,7 +139,149 @@
<para>
For more detailed information on layers, see the
"<ulink url='&YOCTO_DOCS_DEV_URL;#understanding-and-creating-layers'>Understanding and Creating Layers</ulink>"
- section of the Yocto Project Development Manual.
+ section of the Yocto Project Development Tasks Manual.
+ </para>
+ </section>
+
+ <section id='preparing-your-build-host-to-work-with-bsp-layers'>
+ <title>Preparing Your Build Host to Work With BSP Layers</title>
+
+ <para>
+ This section describes how to get your build host ready
+ to work with BSP layers.
+ Once you have the host set up, you can create the layer
+ as described in the
+ "<link linkend='creating-a-new-bsp-layer-using-the-yocto-bsp-script'>Creating a new BSP Layer Using the yocto-bsp Script</link>"
+ section.
+ <note>
+ For structural information on BSPs, see the
+ <link linkend='bsp-filelayout'>Example Filesystem Layout</link>
+ section.
+ </note>
+ <orderedlist>
+ <listitem><para>
+ <emphasis>Set Up the Build Environment:</emphasis>
+ Be sure you are set up to use BitBake in a shell.
+ See the
+ "<ulink url='&YOCTO_DOCS_DEV_URL;#setting-up-the-development-host-to-use-the-yocto-project'>Setting Up the Development Host to Use the Yocto Project</ulink>"
+ section in the Yocto Project Development Tasks Manual for information
+ on how to get a build host ready that is either a native
+ Linux machine or a machine that uses CROPS.
+ </para></listitem>
+ <listitem><para>
+ <emphasis>Clone the <filename>poky</filename> Repository:</emphasis>
+ You need to have a local copy of the Yocto Project
+ <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>
+ (i.e. a local <filename>poky</filename> repository).
+ See the
+ "<ulink url='&YOCTO_DOCS_DEV_URL;#cloning-the-poky-repository'>Cloning the <filename>poky</filename> Repository</ulink>"
+ and possibly the
+ "<ulink url='&YOCTO_DOCS_DEV_URL;#checking-out-by-branch-in-poky'>Checking Out by Branch in Poky</ulink>"
+ and
+ "<ulink url='&YOCTO_DOCS_DEV_URL;#checkout-out-by-tag-in-poky'>Checking Out by Tag in Poky</ulink>"
+ sections all in the Yocto Project Development Tasks Manual for
+ information on how to clone the <filename>poky</filename>
+ repository and check out the appropriate branch for your work.
+ </para></listitem>
+ <listitem><para>
+ <emphasis>Determine the BSP Layer You Want:</emphasis>
+ The Yocto Project supports many BSPs, which are maintained in
+ their own layers or in layers designed to contain several
+ BSPs.
+ To get an idea of machine support through BSP layers, you can
+ look at the
+ <ulink url='&YOCTO_RELEASE_DL_URL;/machines'>index of machines</ulink>
+ for the release.
+ </para></listitem>
+ <listitem><para>
+ <emphasis>Optionally Clone the
+ <filename>meta-intel</filename> BSP Layer:</emphasis>
+ If your hardware is based on current Intel CPUs and devices,
+ you can leverage this BSP layer.
+ For details on the <filename>meta-intel</filename> BSP layer,
+ see the layer's
+ <ulink url='http://git.yoctoproject.org/cgit/cgit.cgi/meta-intel/tree/README'><filename>README</filename></ulink>
+ file.
+ <orderedlist>
+ <listitem><para>
+ <emphasis>Navigate to Your Source Directory:</emphasis>
+ Typically, you set up the
+ <filename>meta-intel</filename> Git repository
+ inside the
+ <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>
+ (e.g. <filename>poky</filename>).
+ </para></listitem>
+ <listitem><para>
+ <emphasis>Clone the Layer:</emphasis>
+ <literallayout class='monospaced'>
+ $ git clone git://git.yoctoproject.org/meta-intel.git
+ Cloning into 'meta-intel'...
+ remote: Counting objects: 14224, done.
+ remote: Compressing objects: 100% (4591/4591), done.
+ remote: Total 14224 (delta 8245), reused 13985 (delta 8006)
+ Receiving objects: 100% (14224/14224), 4.29 MiB | 2.90 MiB/s, done.
+ Resolving deltas: 100% (8245/8245), done.
+ Checking connectivity... done.
+ </literallayout>
+ </para></listitem>
+ <listitem><para>
+ <emphasis>Check Out the Proper Branch:</emphasis>
+ The branch you check out for
+ <filename>meta-intel</filename> must match the same
+ branch you are using for the Yocto Project release
+ (e.g. &DISTRO_NAME_NO_CAP;):
+ <literallayout class='monospaced'>
+ $ git checkout <replaceable>branch_name</replaceable>
+ </literallayout>
+ For an example of how to discover branch names and
+ check out a branch, see the
+ "<ulink url='&YOCTO_DOCS_DEV_URL;#checking-out-by-branch-in-poky'>Checking Out By Branch in Poky</ulink>"
+ section in the Yocto Project Development Tasks Manual.
+ </para></listitem>
+ </orderedlist>
+ </para></listitem>
+ <listitem><para>
+ <emphasis>Optionally Set Up an Alternative BSP Layer:</emphasis>
+ If your hardware more closely matches an
+ existing BSP outside the <filename>meta-intel</filename>
+ BSP layer, you can clone that BSP layer.</para>
+
+ <para>The process is identical to the process used for the
+ <filename>meta-intel</filename> layer except for the layer's
+ name.
+ For example, if you determine that your hardware most
+ closely matches the <filename>meta-minnow</filename>
+ layer, clone that layer:
+ <literallayout class='monospaced'>
+ $ git clone git://git.yoctoproject.org/meta-minnow
+ Cloning into 'meta-minnow'...
+ remote: Counting objects: 456, done.
+ remote: Compressing objects: 100% (283/283), done.
+ remote: Total 456 (delta 163), reused 384 (delta 91)
+ Receiving objects: 100% (456/456), 96.74 KiB | 0 bytes/s, done.
+ Resolving deltas: 100% (163/163), done.
+ Checking connectivity... done.
+ </literallayout>
+ </para></listitem>
+ <listitem><para>
+ <emphasis>Initialize the Build Environment:</emphasis>
+ While in the root directory of the Source Directory (i.e.
+ <filename>poky</filename>), run the
+ <ulink url='&YOCTO_DOCS_REF_URL;#structure-core-script'><filename>&OE_INIT_FILE;</filename></ulink>
+ environment setup script to define the OpenEmbedded
+ build environment on your build host.
+ <literallayout class='monospaced'>
+ $ source &OE_INIT_FILE;
+ </literallayout>
+ Among other things, the script creates the
+ <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>,
+ which is <filename>build</filename> in this case
+ and is located in the
+ <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>.
+ After the script runs, your current working directory
+ is set to the <filename>build</filename> directory.
+ </para></listitem>
+ </orderedlist>
</para>
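+            <para>
+                The following is a brief sketch of discovering and checking
+                out a matching <filename>meta-intel</filename> branch.
+                It assumes your <filename>poky</filename> checkout is on the
+                &DISTRO_NAME_NO_CAP; branch; adjust the branch name to match
+                your own setup:
+                <literallayout class='monospaced'>
+     $ cd meta-intel
+     $ git branch -a
+     $ git checkout &DISTRO_NAME_NO_CAP;
+                </literallayout>
+            </para>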
</section>
@@ -417,7 +557,7 @@
released with the BSP.
The information in the <filename>README.sources</filename>
file also helps you find the
- <ulink url='&YOCTO_DOCS_DEV_URL;#metadata'>Metadata</ulink>
+ <ulink url='&YOCTO_DOCS_REF_URL;#metadata'>Metadata</ulink>
used to generate the images that ship with the BSP.
<note>
If the BSP's <filename>binary</filename> directory is
@@ -522,7 +662,7 @@
<para>
This file simply makes
- <ulink url='&YOCTO_DOCS_DEV_URL;#bitbake-term'>BitBake</ulink>
+ <ulink url='&YOCTO_DOCS_REF_URL;#bitbake-term'>BitBake</ulink>
aware of the recipes and configuration directories.
The file must exist so that the OpenEmbedded build system can recognize the BSP.
</para>
@@ -571,7 +711,7 @@
<para>
Tuning files are found in the <filename>meta/conf/machine/include</filename>
directory within the
- <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>.
+ <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>.
For example, the <filename>ia32-base.inc</filename> file resides in the
<filename>meta/conf/machine/include</filename> directory.
</para>
@@ -627,7 +767,7 @@
formfactor recipe
<filename>meta/recipes-bsp/formfactor/formfactor_0.0.bb</filename>,
which is found in the
- <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>.
+ <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>.
</para></note>
</section>
@@ -667,7 +807,7 @@
<para>
For your BSP, you typically want to use an existing Yocto
Project kernel recipe found in the
- <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>
+ <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>
at <filename>meta/recipes-kernel/linux</filename>.
You can append machine-specific changes to the kernel recipe
by using a similarly named append file, which is located in
@@ -710,6 +850,204 @@
</section>
</section>
+ <section id='developing-a-board-support-package-bsp'>
+ <title>Developing a Board Support Package (BSP)</title>
+
+ <para>
+ This section contains the high-level procedure you can follow
+ to create a BSP using the Yocto Project's
+ <link linkend='using-the-yocto-projects-bsp-tools'>BSP Tools</link>.
+ Although not required for BSP creation, the
+ <filename>meta-intel</filename> repository, which contains
+ many BSPs supported by the Yocto Project, is part of the
+ example.
+ </para>
+
+ <para>
+ For an example that shows how to create a new layer using
+ the tools, see the
+ "<link linkend='creating-a-new-bsp-layer-using-the-yocto-bsp-script'>Creating a New BSP Layer Using the yocto-bsp Script</link>"
+ section.
+ </para>
+
+ <para>
+        The following illustration and list summarize the general
+        workflow for creating a BSP.
+ </para>
+
+ <para>
+ <imagedata fileref="figures/bsp-dev-flow.png" width="7in" depth="5in" align="center" scalefit="1" />
+ </para>
+
+ <para>
+ <orderedlist>
+ <listitem><para>
+ <emphasis>Set up Your Host Development System to Support
+ Development Using the Yocto Project</emphasis>:
+ See the
+ "<ulink url='&YOCTO_DOCS_QS_URL;#yp-resources'>Setting Up to Use the Yocto Project</ulink>"
+ section in the Yocto Project Quick Start for options on how
+ to get a build host ready to use the Yocto Project.
+ </para></listitem>
+ <listitem><para>
+ <emphasis>Establish the <filename>meta-intel</filename>
+ Repository on Your System:</emphasis>
+ Having local copies of these supported BSP layers on
+ your system gives you access to layers you might be able
+ to build on or modify to create your BSP.
+ For information on how to get these files, see the
+ "<link linkend='preparing-your-build-host-to-work-with-bsp-layers'>Preparing Your Build Host to Work with BSP Layers</link>"
+ section.
+ </para></listitem>
+ <listitem><para>
+ <emphasis>Create Your Own BSP Layer Using the
+ <link linkend='creating-a-new-bsp-layer-using-the-yocto-bsp-script'><filename>yocto-bsp</filename></link>
+ script:</emphasis>
+ Layers are ideal for isolating and storing work for a
+ given piece of hardware.
+ A layer is really just a location or area in which you
+ place the recipes and configurations for your BSP.
+ In fact, a BSP is, in itself, a special type of layer.
+ The simplest way to create a new BSP layer that is
+ compliant with the Yocto Project is to use the
+ <filename>yocto-bsp</filename> script.
+ For information about that script, see the
+ "<link linkend='creating-a-new-bsp-layer-using-the-yocto-bsp-script'>Creating a New BSP Layer Using the yocto-bsp Script</link>"
+ section.</para>
+
+                    <para>Another example of a layer is one created
+                        to support a particular application.
+ Suppose you are creating an application that has
+ library or other dependencies in order for it to
+ compile and run.
+ The layer, in this case, would be where all the
+ recipes that define those dependencies are kept.
+ The key point for a layer is that it is an isolated
+ area that contains all the relevant information for
+ the project that the OpenEmbedded build system knows
+ about.
+ For more information on layers, see the
+ "<ulink url='&YOCTO_DOCS_DEV_URL;#understanding-and-creating-layers'>Understanding and Creating Layers</ulink>"
+ section in the Yocto Project Development Tasks Manual.
+ For more information on BSP layers, see the
+ "<link linkend='bsp-layers'>BSP Layers</link>"
+ section.
+ <note><title>Notes</title>
+                            <para>Three BSPs exist that are part of the Yocto
+                                Project release:
+                                <filename>beaglebone</filename> (ARM),
+                                <filename>mpc8315e</filename> (PowerPC),
+                                and <filename>edgerouter</filename> (MIPS).
+                                The recipes and configurations for these three BSPs
+ are located and dispersed within the
+ <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>.
+ </para>
+
+ <para>Three core Intel BSPs exist as part of the Yocto
+ Project release in the
+ <filename>meta-intel</filename> layer:
+ <itemizedlist>
+ <listitem><para>
+ <filename>intel-core2-32</filename>,
+ which is a BSP optimized for the Core2 family of CPUs
+ as well as all CPUs prior to the Silvermont core.
+ </para></listitem>
+ <listitem><para>
+ <filename>intel-corei7-64</filename>,
+ which is a BSP optimized for Nehalem and later
+ Core and Xeon CPUs as well as Silvermont and later
+ Atom CPUs, such as the Baytrail SoCs.
+ </para></listitem>
+ <listitem><para>
+ <filename>intel-quark</filename>,
+ which is a BSP optimized for the Intel Galileo
+                                    gen1 &amp; gen2 development boards.
+ </para></listitem>
+ </itemizedlist></para>
+ </note></para>
+
+ <para>When you set up a layer for a new BSP, you should
+ follow a standard layout.
+ This layout is described in the
+ "<link linkend='bsp-filelayout'>Example Filesystem Layout</link>"
+ section.
+ In the standard layout, you will notice a suggested
+ structure for recipes and configuration information.
+ You can see the standard layout for a BSP by examining
+ any supported BSP found in the
+ <filename>meta-intel</filename> layer inside the Source
+ Directory.
+ </para></listitem>
+ <listitem><para>
+ <emphasis>Make Configuration Changes to Your New BSP
+ Layer:</emphasis>
+ The standard BSP layer structure organizes the files
+ you need to edit in <filename>conf</filename> and
+ several <filename>recipes-*</filename>
+ directories within the BSP layer.
+ Configuration changes identify where your new layer
+ is on the local system and identify which kernel you
+ are going to use.
+                        When you run the <filename>yocto-bsp</filename> script,
+                        you are able to interactively configure many things for
+                        the BSP (e.g. keyboard, touchscreen, and so forth).
+                        A sketch of a hypothetical machine configuration file
+                        appears after this list.
+ </para></listitem>
+ <listitem><para>
+ <emphasis>Make Recipe Changes to Your New BSP
+ Layer:</emphasis>
+ Recipe changes include altering recipes
+ (<filename>.bb</filename> files), removing recipes you
+ do not use, and adding new recipes or append files
+ (<filename>.bbappend</filename>) that you need to
+ support your hardware.
+ </para></listitem>
+ <listitem><para>
+ <emphasis>Prepare for the Build:</emphasis>
+ Once you have made all the changes to your BSP layer,
+                        a few things remain that you need to do for the
+ OpenEmbedded build system in order for it to create
+ your image.
+ You need to get the build environment ready by
+ sourcing an environment setup script
+ (i.e. <filename>oe-init-build-env</filename>)
+ and you need to be sure two key configuration
+ files are configured appropriately: the
+ <filename>conf/local.conf</filename> and the
+ <filename>conf/bblayers.conf</filename> file.
+ You must make the OpenEmbedded build system aware
+ of your new layer.
+ See the
+ "<ulink url='&YOCTO_DOCS_DEV_URL;#enabling-your-layer'>Enabling Your Layer</ulink>"
+ section in the Yocto Project Development Tasks Manual
+ for information on how to let the build system
+ know about your new layer.</para>
+
+                    <para>The entire process for building an image is
+                        overviewed in the
+                        "<ulink url='&YOCTO_DOCS_QS_URL;#qs-building-images'>Building Images</ulink>"
+                        section of the Yocto Project Quick Start.
+                        You might want to reference this information.
+                        A sketch of a typical command sequence for this step
+                        and the next also follows this list.
+ </para></listitem>
+ <listitem><para>
+ <emphasis>Build the Image:</emphasis>
+ The OpenEmbedded build system uses the BitBake tool
+ to build images based on the type of image you want to
+ create.
+ You can find more information about BitBake in the
+ <ulink url='&YOCTO_DOCS_BB_URL;'>BitBake User Manual</ulink>.
+ </para>
+
+ <para>The build process supports several types of
+ images to satisfy different needs.
+ See the
+ "<ulink url='&YOCTO_DOCS_REF_URL;#ref-images'>Images</ulink>"
+ chapter in the Yocto Project Reference Manual for
+ information on supported images.
+ </para></listitem>
+ </orderedlist>
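+            The following is a minimal sketch of the kind of machine
+            configuration file referred to in the "Make Configuration
+            Changes" step above.
+            The file name, tune include, and variable values are
+            hypothetical examples only and must be adapted to your hardware:
+            <literallayout class='monospaced'>
+     # conf/machine/mymachine.conf -- hypothetical example values
+     #@TYPE: Machine
+     #@NAME: mymachine
+
+     require conf/machine/include/tune-cortexa8.inc
+
+     PREFERRED_PROVIDER_virtual/kernel ?= "linux-yocto"
+     KERNEL_IMAGETYPE = "zImage"
+
+     MACHINE_FEATURES = "usbhost screen"
+     SERIAL_CONSOLES = "115200;ttyO0"
+            </literallayout>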
+ </para>
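+            <para>
+                The following is a rough sketch of the command sequence for
+                the last two steps.
+                It assumes a hypothetical BSP layer named
+                <filename>meta-mymachine</filename>, cloned into the
+                <filename>poky</filename> directory, that provides a machine
+                named "mymachine"; substitute the names you actually chose:
+                <literallayout class='monospaced'>
+     $ cd poky
+     $ source oe-init-build-env
+     $ bitbake-layers add-layer ../meta-mymachine
+     $ MACHINE=mymachine bitbake core-image-minimal
+                </literallayout>
+            </para>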
+ </section>
+
<section id='requirements-and-recommendations-for-released-bsps'>
<title>Requirements and Recommendations for Released BSPs</title>
@@ -732,24 +1070,28 @@
For guidelines on creating a layer that meets these base requirements, see the
"<link linkend='bsp-layers'>BSP Layers</link>" and the
"<ulink url='&YOCTO_DOCS_DEV_URL;#understanding-and-creating-layers'>Understanding
- and Creating Layers"</ulink> in the Yocto Project Development Manual.</para></listitem>
+ and Creating Layers"</ulink> in the Yocto Project Development Tasks Manual.
+ </para></listitem>
<listitem><para>The requirements in this section apply regardless of how you
package a BSP.
You should consult the packaging and distribution guidelines for your
specific release process.
For an example of packaging and distribution requirements, see the
"<ulink url='https://wiki.yoctoproject.org/wiki/Third_Party_BSP_Release_Process'>Third Party BSP Release Process</ulink>"
- wiki page.</para></listitem>
+ wiki page.
+ </para></listitem>
<listitem><para>The requirements for the BSP as it is made available to a developer
are completely independent of the released form of the BSP.
For example, the BSP Metadata can be contained within a Git repository
and could have a directory structure completely different from what appears
- in the officially released BSP layer.</para></listitem>
+ in the officially released BSP layer.
+ </para></listitem>
<listitem><para>It is not required that specific packages or package
modifications exist in the BSP layer, beyond the requirements for general
compliance with the Yocto Project.
For example, no requirement exists dictating that a specific kernel or
- kernel version be used in a given BSP.</para></listitem>
+ kernel version be used in a given BSP.
+ </para></listitem>
</itemizedlist>
</para>
@@ -776,7 +1118,7 @@
<filename>recipes-*</filename> subdirectory.
You can find <filename>recipes.txt</filename> in the
<filename>meta</filename> directory of the
- <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>,
+ <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>,
or in the OpenEmbedded Core Layer
(<filename>openembedded-core</filename>) found at
<ulink url='http://git.openembedded.org/openembedded-core/tree/meta'></ulink>.
@@ -832,8 +1174,8 @@
This is the person to whom patches and questions should
be sent.
For information on how to find the right person, see the
- "<ulink url='&YOCTO_DOCS_DEV_URL;#how-to-submit-a-change'>How to Submit a Change</ulink>"
- section in the Yocto Project Development Manual.
+ "<ulink url='&YOCTO_DOCS_DEV_URL;#how-to-submit-a-change'>Submitting a Change to the Yocto Project</ulink>"
+ section in the Yocto Project Development Tasks Manual.
</para></listitem>
<listitem><para>Instructions on how to build the BSP using the BSP
layer.</para></listitem>
@@ -937,7 +1279,7 @@
file for the modified recipe.
For information on using append files, see the
"<ulink url='&YOCTO_DOCS_DEV_URL;#using-bbappend-files'>Using .bbappend Files in Your Layer</ulink>"
- section in the Yocto Project Development Manual.
+ section in the Yocto Project Development Tasks Manual.
</para></listitem>
<listitem><para>
Ensure your directory structure in the BSP layer
@@ -1144,7 +1486,7 @@
<para>
Designed to have a command interface somewhat like
- <ulink url='&YOCTO_DOCS_DEV_URL;#git'>Git</ulink>, each
+ <ulink url='&YOCTO_DOCS_REF_URL;#git'>Git</ulink>, each
tool is structured as a set of sub-commands under a
top-level command.
The top-level command (<filename>yocto-bsp</filename>
@@ -1155,7 +1497,7 @@
<para>
Both tools reside in the <filename>scripts/</filename> subdirectory
- of the <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>.
+ of the <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>.
Consequently, to use the scripts, you must <filename>source</filename> the
environment just as you would when invoking a build:
<literallayout class='monospaced'>
@@ -1251,11 +1593,11 @@
necessary to create a BSP and perform basic kernel maintenance on that BSP using
the tools.
<note>
- You can also use the <filename>yocto-layer</filename> tool to create
+ You can also use the <filename>bitbake-layers</filename> script to create
a "generic" layer.
- For information on this tool, see the
- "<ulink url='&YOCTO_DOCS_DEV_URL;#creating-a-general-layer-using-the-yocto-layer-script'>Creating a General Layer Using the yocto-layer Script</ulink>"
- section in the Yocto Project Development Guide.
+ For information on using this script to create a layer, see the
+ "<ulink url='&YOCTO_DOCS_DEV_URL;#creating-a-general-layer-using-the-bitbake-layers-script'>Creating a General Layer Using the <filename>bitbake-layers</filename> Script</ulink>"
+ section in the Yocto Project Development Tasks Manual.
</note>
</para>
@@ -1384,8 +1726,7 @@
you do want to use.</para></listitem>
<listitem><para>Next, the script asks whether you would like to have a new
branch created especially for your BSP in the local
- <ulink url='&YOCTO_DOCS_DEV_URL;#local-kernel-files'>Linux Yocto Kernel</ulink>
- Git repository .
+                    Linux Yocto Kernel Git repository.
If not, then the script re-uses an existing branch.</para>
<para>In this example, the default (or "yes") is accepted.
Thus, a new branch is created for the BSP rather than using a common, shared
@@ -1406,7 +1747,7 @@
Defaults are accepted for each.</para></listitem>
<listitem><para>By default, the script creates the new BSP Layer in the
current working directory of the
- <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>,
+ <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>,
(i.e. <filename>poky/build</filename>).
</para></listitem>
</orderedlist>
diff --git a/import-layers/yocto-poky/documentation/bsp-guide/figures/bsp-dev-flow.png b/import-layers/yocto-poky/documentation/bsp-guide/figures/bsp-dev-flow.png
new file mode 100644
index 0000000..0f82a1f
--- /dev/null
+++ b/import-layers/yocto-poky/documentation/bsp-guide/figures/bsp-dev-flow.png
Binary files differ
diff --git a/import-layers/yocto-poky/documentation/dev-manual/dev-manual-common-tasks.xml b/import-layers/yocto-poky/documentation/dev-manual/dev-manual-common-tasks.xml
index 598f877..0081738 100644
--- a/import-layers/yocto-poky/documentation/dev-manual/dev-manual-common-tasks.xml
+++ b/import-layers/yocto-poky/documentation/dev-manual/dev-manual-common-tasks.xml
@@ -18,7 +18,8 @@
<para>
The OpenEmbedded build system supports organizing
- <link linkend='metadata'>Metadata</link> into multiple layers.
+ <ulink url='&YOCTO_DOCS_REF_URL;#metadata'>Metadata</ulink> into
+ multiple layers.
Layers allow you to isolate different types of customizations from
each other.
You might find it tempting to keep everything in one layer when
@@ -58,7 +59,7 @@
<title>Layers</title>
<para>
- The <link linkend='source-directory'>Source Directory</link>
+ The <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>
contains both general layers and BSP
layers right out of the box.
You can easily identify layers that ship with a
@@ -107,7 +108,7 @@
"<ulink url='&YOCTO_DOCS_BSP_URL;#creating-a-new-bsp-layer-using-the-yocto-bsp-script'>Creating a New BSP Layer Using the yocto-bsp Script</ulink>"
section in the Yocto Project Board Support Package (BSP)
Developer's Guide and the
- "<link linkend='creating-a-general-layer-using-the-yocto-layer-script'>Creating a General Layer Using the yocto-layer Script</link>"
+ "<link linkend='creating-a-general-layer-using-the-bitbake-layers-script'>Creating a General Layer Using the <filename>bitbake-layers</filename> Script</link>"
section further down in this manual.
</para>
@@ -254,197 +255,182 @@
</section>
<section id='best-practices-to-follow-when-creating-layers'>
- <title>Best Practices to Follow When Creating Layers</title>
+ <title>Following Best Practices When Creating Layers</title>
<para>
To create layers that are easier to maintain and that will
not impact builds for other machines, you should consider the
- information in the following sections.
- </para>
-
- <section id='avoid-overlaying-entire-recipes'>
- <title>Avoid "Overlaying" Entire Recipes</title>
-
- <para>
- Avoid "overlaying" entire recipes from other layers in your
- configuration.
- In other words, do not copy an entire recipe into your
- layer and then modify it.
- Rather, use an append file (<filename>.bbappend</filename>)
- to override
- only those parts of the original recipe you need to modify.
- </para>
- </section>
-
- <section id='avoid-duplicating-include-files'>
- <title>Avoid Duplicating Include Files</title>
-
- <para>
- Avoid duplicating include files.
- Use append files (<filename>.bbappend</filename>)
- for each recipe
- that uses an include file.
- Or, if you are introducing a new recipe that requires
- the included file, use the path relative to the original
- layer directory to refer to the file.
- For example, use
- <filename>require recipes-core/</filename><replaceable>package</replaceable><filename>/</filename><replaceable>file</replaceable><filename>.inc</filename>
- instead of <filename>require </filename><replaceable>file</replaceable><filename>.inc</filename>.
- If you're finding you have to overlay the include file,
- it could indicate a deficiency in the include file in
- the layer to which it originally belongs.
- If this is the case, you should try to address that
- deficiency instead of overlaying the include file.
- For example, you could address this by getting the
- maintainer of the include file to add a variable or
- variables to make it easy to override the parts needing
- to be overridden.
- </para>
- </section>
-
- <section id='structure-your-layers'>
- <title>Structure Your Layers</title>
-
- <para>
- Proper use of overrides within append files and placement
- of machine-specific files within your layer can ensure that
- a build is not using the wrong Metadata and negatively
- impacting a build for a different machine.
- Following are some examples:
- <itemizedlist>
- <listitem><para><emphasis>Modifying Variables to Support
- a Different Machine:</emphasis>
- Suppose you have a layer named
- <filename>meta-one</filename> that adds support
- for building machine "one".
- To do so, you use an append file named
- <filename>base-files.bbappend</filename> and
- create a dependency on "foo" by altering the
- <ulink url='&YOCTO_DOCS_REF_URL;#var-DEPENDS'><filename>DEPENDS</filename></ulink>
- variable:
- <literallayout class='monospaced'>
+ information in the following list:
+ <itemizedlist>
+ <listitem><para>
+ <emphasis>Avoid "Overlaying" Entire Recipes from Other Layers in Your Configuration:</emphasis>
+ In other words, do not copy an entire recipe into your
+ layer and then modify it.
+ Rather, use an append file
+ (<filename>.bbappend</filename>) to override only those
+ parts of the original recipe you need to modify.
+ </para></listitem>
+ <listitem><para>
+ <emphasis>Avoid Duplicating Include Files:</emphasis>
+ Use append files (<filename>.bbappend</filename>)
+ for each recipe that uses an include file.
+ Or, if you are introducing a new recipe that requires
+ the included file, use the path relative to the
+ original layer directory to refer to the file.
+ For example, use
+ <filename>require recipes-core/</filename><replaceable>package</replaceable><filename>/</filename><replaceable>file</replaceable><filename>.inc</filename>
+ instead of
+ <filename>require </filename><replaceable>file</replaceable><filename>.inc</filename>.
+ If you're finding you have to overlay the include file,
+ it could indicate a deficiency in the include file in
+ the layer to which it originally belongs.
+ If this is the case, you should try to address that
+ deficiency instead of overlaying the include file.
+ For example, you could address this by getting the
+ maintainer of the include file to add a variable or
+ variables to make it easy to override the parts needing
+ to be overridden.
+ </para></listitem>
+ <listitem><para>
+ <emphasis>Structure Your Layers:</emphasis>
+ Proper use of overrides within append files and
+ placement of machine-specific files within your layer
+ can ensure that a build is not using the wrong Metadata
+ and negatively impacting a build for a different
+ machine.
+ Following are some examples:
+ <itemizedlist>
+ <listitem><para>
+ <emphasis>Modify Variables to Support a
+ Different Machine:</emphasis>
+ Suppose you have a layer named
+ <filename>meta-one</filename> that adds support
+ for building machine "one".
+ To do so, you use an append file named
+ <filename>base-files.bbappend</filename> and
+ create a dependency on "foo" by altering the
+ <ulink url='&YOCTO_DOCS_REF_URL;#var-DEPENDS'><filename>DEPENDS</filename></ulink>
+ variable:
+ <literallayout class='monospaced'>
DEPENDS = "foo"
- </literallayout>
- The dependency is created during any build that
- includes the layer
- <filename>meta-one</filename>.
- However, you might not want this dependency
- for all machines.
- For example, suppose you are building for
- machine "two" but your
- <filename>bblayers.conf</filename> file has the
- <filename>meta-one</filename> layer included.
- During the build, the
- <filename>base-files</filename> for machine
- "two" will also have the dependency on
- <filename>foo</filename>.</para>
- <para>To make sure your changes apply only when
- building machine "one", use a machine override
- with the <filename>DEPENDS</filename> statement:
- <literallayout class='monospaced'>
+ </literallayout>
+ The dependency is created during any build that
+ includes the layer
+ <filename>meta-one</filename>.
+ However, you might not want this dependency
+ for all machines.
+ For example, suppose you are building for
+ machine "two" but your
+ <filename>bblayers.conf</filename> file has the
+ <filename>meta-one</filename> layer included.
+ During the build, the
+ <filename>base-files</filename> for machine
+ "two" will also have the dependency on
+ <filename>foo</filename>.</para>
+ <para>To make sure your changes apply only when
+ building machine "one", use a machine override
+ with the <filename>DEPENDS</filename> statement:
+ <literallayout class='monospaced'>
DEPENDS_one = "foo"
- </literallayout>
- You should follow the same strategy when using
- <filename>_append</filename> and
- <filename>_prepend</filename> operations:
- <literallayout class='monospaced'>
+ </literallayout>
+ You should follow the same strategy when using
+ <filename>_append</filename> and
+ <filename>_prepend</filename> operations:
+ <literallayout class='monospaced'>
DEPENDS_append_one = " foo"
DEPENDS_prepend_one = "foo "
- </literallayout>
- As an actual example, here's a line from the recipe
- for gnutls, which adds dependencies on
- "argp-standalone" when building with the musl C
- library:
- <literallayout class='monospaced'>
+ </literallayout>
+ As an actual example, here's a line from the recipe
+ for gnutls, which adds dependencies on
+ "argp-standalone" when building with the musl C
+ library:
+ <literallayout class='monospaced'>
DEPENDS_append_libc-musl = " argp-standalone"
- </literallayout>
- <note>
- Avoiding "+=" and "=+" and using
- machine-specific
- <filename>_append</filename>
- and <filename>_prepend</filename> operations
- is recommended as well.
- </note></para></listitem>
- <listitem><para><emphasis>Place Machine-Specific Files
- in Machine-Specific Locations:</emphasis>
- When you have a base recipe, such as
- <filename>base-files.bb</filename>, that
- contains a
- <ulink url='&YOCTO_DOCS_REF_URL;#var-SRC_URI'><filename>SRC_URI</filename></ulink>
- statement to a file, you can use an append file
- to cause the build to use your own version of
- the file.
- For example, an append file in your layer at
- <filename>meta-one/recipes-core/base-files/base-files.bbappend</filename>
- could extend
- <ulink url='&YOCTO_DOCS_REF_URL;#var-FILESPATH'><filename>FILESPATH</filename></ulink>
- using
- <ulink url='&YOCTO_DOCS_REF_URL;#var-FILESEXTRAPATHS'><filename>FILESEXTRAPATHS</filename></ulink>
- as follows:
- <literallayout class='monospaced'>
+ </literallayout>
+ <note>
+ Avoiding "+=" and "=+" and using
+ machine-specific
+ <filename>_append</filename>
+ and <filename>_prepend</filename> operations
+ is recommended as well.
+ </note>
+ </para></listitem>
+ <listitem><para>
+ <emphasis>Place Machine-Specific Files in
+ Machine-Specific Locations:</emphasis>
+ When you have a base recipe, such as
+ <filename>base-files.bb</filename>, that
+ contains a
+ <ulink url='&YOCTO_DOCS_REF_URL;#var-SRC_URI'><filename>SRC_URI</filename></ulink>
+ statement to a file, you can use an append file
+ to cause the build to use your own version of
+ the file.
+ For example, an append file in your layer at
+ <filename>meta-one/recipes-core/base-files/base-files.bbappend</filename>
+ could extend
+ <ulink url='&YOCTO_DOCS_REF_URL;#var-FILESPATH'><filename>FILESPATH</filename></ulink>
+ using
+ <ulink url='&YOCTO_DOCS_REF_URL;#var-FILESEXTRAPATHS'><filename>FILESEXTRAPATHS</filename></ulink>
+ as follows:
+ <literallayout class='monospaced'>
FILESEXTRAPATHS_prepend := "${THISDIR}/${BPN}:"
- </literallayout>
- The build for machine "one" will pick up your
- machine-specific file as long as you have the
- file in
- <filename>meta-one/recipes-core/base-files/base-files/</filename>.
- However, if you are building for a different
- machine and the
- <filename>bblayers.conf</filename> file includes
- the <filename>meta-one</filename> layer and
- the location of your machine-specific file is
- the first location where that file is found
- according to <filename>FILESPATH</filename>,
- builds for all machines will also use that
- machine-specific file.</para>
- <para>You can make sure that a machine-specific
- file is used for a particular machine by putting
- the file in a subdirectory specific to the
- machine.
- For example, rather than placing the file in
- <filename>meta-one/recipes-core/base-files/base-files/</filename>
- as shown above, put it in
- <filename>meta-one/recipes-core/base-files/base-files/one/</filename>.
- Not only does this make sure the file is used
- only when building for machine "one", but the
- build process locates the file more quickly.</para>
- <para>In summary, you need to place all files
- referenced from <filename>SRC_URI</filename>
- in a machine-specific subdirectory within the
- layer in order to restrict those files to
- machine-specific builds.</para></listitem>
- </itemizedlist>
- </para>
- </section>
-
- <section id='other-recommendations'>
- <title>Other Recommendations</title>
-
- <para>
- We also recommend the following:
- <itemizedlist>
- <listitem><para>If you want permission to use the
- Yocto Project Compatibility logo with your layer
- or application that uses your layer, perform the
- steps to apply for compatibility.
- See the
- "<link linkend='making-sure-your-layer-is-compatible-with-yocto-project'>Making Sure Your Layer is Compatible With Yocto Project</link>"
- section for more information.
- </para></listitem>
- <listitem><para>Store custom layers in a Git repository
- that uses the
- <filename>meta-<replaceable>layer_name</replaceable></filename> format.
- </para></listitem>
- <listitem><para>Clone the repository alongside other
- <filename>meta</filename> directories in the
- <link linkend='source-directory'>Source Directory</link>.
- </para></listitem>
- </itemizedlist>
- Following these recommendations keeps your Source Directory and
- its configuration entirely inside the Yocto Project's core
- base.
- </para>
- </section>
+ </literallayout>
+ The build for machine "one" will pick up your
+ machine-specific file as long as you have the
+ file in
+ <filename>meta-one/recipes-core/base-files/base-files/</filename>.
+ However, if you are building for a different
+ machine and the
+ <filename>bblayers.conf</filename> file includes
+ the <filename>meta-one</filename> layer and
+ the location of your machine-specific file is
+ the first location where that file is found
+ according to <filename>FILESPATH</filename>,
+ builds for all machines will also use that
+ machine-specific file.</para>
+ <para>You can make sure that a machine-specific
+ file is used for a particular machine by putting
+ the file in a subdirectory specific to the
+ machine.
+ For example, rather than placing the file in
+ <filename>meta-one/recipes-core/base-files/base-files/</filename>
+ as shown above, put it in
+ <filename>meta-one/recipes-core/base-files/base-files/one/</filename>.
+ Not only does this make sure the file is used
+ only when building for machine "one", but the
+ build process locates the file more quickly.</para>
+ <para>In summary, you need to place all files
+ referenced from <filename>SRC_URI</filename>
+ in a machine-specific subdirectory within the
+ layer in order to restrict those files to
+ machine-specific builds.
+ </para></listitem>
+ </itemizedlist>
+ </para></listitem>
+ <listitem><para>
+ <emphasis>Perform Steps to Apply for Yocto Project Compatibility:</emphasis>
+ If you want permission to use the
+ Yocto Project Compatibility logo with your layer
+ or application that uses your layer, perform the
+ steps to apply for compatibility.
+ See the
+ "<link linkend='making-sure-your-layer-is-compatible-with-yocto-project'>Making Sure Your Layer is Compatible With Yocto Project</link>"
+ section for more information.
+ </para></listitem>
+ <listitem><para>
+ <emphasis>Follow the Layer Naming Convention:</emphasis>
+                    Store custom layers in a Git repository that uses the
+ <filename>meta-<replaceable>layer_name</replaceable></filename>
+ format.
+ </para></listitem>
+ <listitem><para>
+ <emphasis>Group Your Layers Locally:</emphasis>
+                    Clone your repository alongside other cloned
+                    <filename>meta</filename> directories from the
+                    <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>.
+                    A brief example follows this list.
+                </para></listitem>
+ </itemizedlist>
+ </para>
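+            <para>
+                The following is a brief sketch of the last two
+                recommendations.
+                The repository URL and layer name are placeholders only:
+                <literallayout class='monospaced'>
+     $ cd poky
+     $ git clone https://git.example.com/your-org/meta-mylayer.git
+                </literallayout>
+                The cloned <filename>meta-mylayer</filename> directory then
+                sits alongside the <filename>meta</filename>,
+                <filename>meta-poky</filename>, and
+                <filename>meta-yocto-bsp</filename> directories that ship
+                with the Source Directory.
+            </para>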
</section>
<section id='making-sure-your-layer-is-compatible-with-yocto-project'>
@@ -456,65 +442,74 @@
existing Yocto Project layers (i.e. the layer is compatible
with the Yocto Project).
Ensuring compatibility makes the layer easy to be consumed
- by others in the Yocto Project community and allows you
- permission to use the Yocto Project Compatibility logo.
- </para>
-
- <para>
- Version 1.0 of the Yocto Project Compatibility Program has
- been in existence for a number of releases.
- This version of the program consists of the layer application
- process that requests permission to use the Yocto Project
- Compatibility logo for your layer and application.
- You can find version 1.0 of the form at
- <ulink url='https://www.yoctoproject.org/webform/yocto-project-compatible-registration'></ulink>.
- To be granted permission to use the logo, you need to be able
- to answer "Yes" to the questions or have an acceptable
- explanation for any questions answered "No".
- </para>
-
- <para>
- A second version (2.0) of the Yocto Project Compatibility
- Program is currently under development.
- Included as part of version 2.0 (and currently available) is
- the <filename>yocto-compat-layer.py</filename> script.
- When run against a layer, this script tests the layer against
- tighter constraints based on experiences of how layers have
- worked in the real world and where pitfalls have been found.
- </para>
-
- <para>
- Part of the 2.0 version of the program that is not currently
- available but is in development is an updated compatibility
- application form.
- This updated form, among other questions, specifically
- asks if your layer has passed the test using the
- <filename>yocto-compat-layer.py</filename> script.
- <note><title>Tip</title>
- Even though the updated application form is currently
- unavailable for version 2.0 of the Yocto Project
- Compatibility Program, the
- <filename>yocto-compat-layer.py</filename> script is
- available in OE-Core.
- You can use the script to assess the status of your
- layers in advance of the 2.0 release of the program.
+            by others in the Yocto Project community and could qualify you
+            for permission to use the Yocto Project Compatible Logo.
+ <note>
+ Only Yocto Project member organizations are permitted to
+ use the Yocto Project Compatible Logo.
+ The logo is not available for general use.
+ For information on how to become a Yocto Project member
+ organization, see the
+ <ulink url='&YOCTO_HOME_URL;'>Yocto Project Website</ulink>.
</note>
</para>
<para>
- The remainder of this section presents information on the
- version 1.0 registration form and on the
- <filename>yocto-compat-layer.py</filename> script.
+ The Yocto Project Compatibility Program consists of a layer
+ application process that requests permission to use the Yocto
+ Project Compatibility Logo for your layer and application.
+ The process consists of two parts:
+ <orderedlist>
+ <listitem><para>
+ Successfully passing a script
+                    (<filename>yocto-check-layer</filename>) that,
+                    when run against your layer, tests it against
+ constraints based on experiences of how layers have
+ worked in the real world and where pitfalls have been
+ found.
+ Getting a "PASS" result from the script is required for
+ successful compatibility registration.
+ </para></listitem>
+ <listitem><para>
+ Completion of an application acceptance form, which
+ you can find at
+ <ulink url='https://www.yoctoproject.org/webform/yocto-project-compatible-registration'></ulink>.
+ </para></listitem>
+ </orderedlist>
</para>
- <section id='yocto-project-compatibility-program-application'>
- <title>Yocto Project Compatibility Program Application</title>
+ <para>
+ To be granted permission to use the logo, you need to satisfy
+ the following:
+ <itemizedlist>
+ <listitem><para>
+ Be able to check the box indicating that you
+ got a "PASS" when running the script against your
+ layer.
+ </para></listitem>
+ <listitem><para>
+ Answer "Yes" to the questions on the form or have an
+ acceptable explanation for any questions answered "No".
+ </para></listitem>
+ <listitem><para>
+                    Be a Yocto Project Member Organization.
+ </para></listitem>
+ </itemizedlist>
+ </para>
+
+ <para>
+ The remainder of this section presents information on the
+ registration form and on the
+ <filename>yocto-check-layer</filename> script.
+ </para>
+
+ <section id='yocto-project-compatible-program-application'>
+ <title>Yocto Project Compatible Program Application</title>
<para>
- Use the 1.0 version of the form to apply for your
- layer's compatibility approval.
+ Use the form to apply for your layer's approval.
Upon successful application, you can use the Yocto
- Project Compatibility logo with your layer and the
+ Project Compatibility Logo with your layer and the
application that uses your layer.
</para>
@@ -552,26 +547,18 @@
</para>
</section>
- <section id='yocto-compat-layer-py-script'>
- <title><filename>yocto-compat-layer.py</filename> Script</title>
+ <section id='yocto-check-layer-script'>
+ <title><filename>yocto-check-layer</filename> Script</title>
<para>
- The <filename>yocto-compat-layer.py</filename> script,
- which is currently available, provides you a way to
- assess how compatible your layer is with the Yocto
- Project.
+ The <filename>yocto-check-layer</filename> script
+ provides you a way to assess how compatible your layer is
+ with the Yocto Project.
You should run this script prior to using the form to
apply for compatibility as described in the previous
section.
- <note>
- Because the script is part of the 2.0 release of the
- Yocto Project Compatibility Program, you are not
- required to successfully run your layer against it
- in order to be granted compatibility status.
- However, it is a good idea as it promotes
- well-behaved layers and gives you an idea of where your
- layer stands regarding compatibility.
- </note>
+ You need to achieve a "PASS" result in order to have
+ your application form successfully processed.
</para>
<para>
@@ -588,7 +575,7 @@
your build directory:
<literallayout class='monospaced'>
$ source oe-init-build-env
- $ yocto-compat-layer.py <replaceable>your_layer_directory</replaceable>
+ $ yocto-check-layer <replaceable>your_layer_directory</replaceable>
</literallayout>
Be sure to provide the actual directory for your layer
as part of the command.
@@ -655,7 +642,7 @@
<filename><ulink url='&YOCTO_DOCS_REF_URL;#var-BBLAYERS'>BBLAYERS</ulink></filename>
variable in your <filename>conf/bblayers.conf</filename> file,
which is found in the
- <link linkend='build-directory'>Build Directory</link>.
+ <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>.
The following example shows how to enable a layer named
<filename>meta-mylayer</filename>:
<literallayout class='monospaced'>
@@ -736,7 +723,7 @@
<para>
As an example, consider the main formfactor recipe and a
corresponding formfactor append file both from the
- <link linkend='source-directory'>Source Directory</link>.
+ <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>.
Here is the main formfactor recipe, which is named
<filename>formfactor_0.0.bb</filename> and located in the
"meta" layer at
@@ -975,128 +962,163 @@
...
EXTRA_OECONF = "--enable-something --enable-somethingelse"
...
- </literallayout></para></listitem>
- </itemizedlist></para></listitem>
+ </literallayout>
+ </para></listitem>
+ </itemizedlist>
+ </para></listitem>
+ <listitem><para>
+ <emphasis><filename>layerindex-fetch</filename>:</emphasis>
+                    Fetches a layer from a layer index, along with its
+                    dependent layers, and adds the layers to the
+                    <filename>conf/bblayers.conf</filename> file.
+                    A short usage sketch follows this list.
+                </para></listitem>
+ <listitem><para>
+ <emphasis><filename>layerindex-show-depends</filename>:</emphasis>
+ Finds layer dependencies from the layer index.
+ </para></listitem>
+ <listitem><para>
+ <emphasis><filename>create-layer</filename>:</emphasis>
+ Creates a basic layer.
+ </para></listitem>
</itemizedlist>
</para>
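+            <para>
+                The following is a short usage sketch of
+                <filename>layerindex-fetch</filename>.
+                It assumes the layer you want is published in the public
+                OpenEmbedded layer index; the layer name shown is an
+                example only:
+                <literallayout class='monospaced'>
+     $ bitbake-layers layerindex-fetch meta-oe
+                </literallayout>
+            </para>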
</section>
- <section id='creating-a-general-layer-using-the-yocto-layer-script'>
- <title>Creating a General Layer Using the yocto-layer Script</title>
+ <section id='creating-a-general-layer-using-the-bitbake-layers-script'>
+ <title>Creating a General Layer Using the <filename>bitbake-layers</filename> Script</title>
<para>
- The <filename>yocto-layer</filename> script simplifies
+ The <filename>bitbake-layers</filename> script with the
+ <filename>create-layer</filename> subcommand simplifies
creating a new general layer.
- <note>
- For information on BSP layers, see the
- "<ulink url='&YOCTO_DOCS_BSP_URL;#bsp-layers'>BSP Layers</ulink>"
- section in the Yocto Project Board Specific (BSP)
- Developer's Guide.
+ <note><title>Notes</title>
+ <itemizedlist>
+ <listitem><para>
+ For information on BSP layers, see the
+ "<ulink url='&YOCTO_DOCS_BSP_URL;#bsp-layers'>BSP Layers</ulink>"
+                            section in the Yocto Project Board Support Package (BSP)
+ Developer's Guide.
+ </para></listitem>
+ <listitem><para>
+ The <filename>bitbake-layers</filename> script
+ replaces the <filename>yocto-layer</filename>
+ script, which is deprecated in the Yocto Project
+ 2.4 release.
+ The <filename>yocto-layer</filename> script
+ continues to function as part of the 2.4 release
+ but will be removed post 2.4.
+ </para></listitem>
+ </itemizedlist>
</note>
- The default mode of the script's operation is to prompt you for
- information needed to generate the layer:
+ The default mode of the script's operation with this
+ subcommand is to create a layer with the following:
<itemizedlist>
- <listitem><para>The layer priority.
+ <listitem><para>A layer priority of 6.
</para></listitem>
- <listitem><para>Whether or not to create a sample recipe.
+ <listitem><para>A <filename>conf</filename>
+ subdirectory that contains a
+ <filename>layer.conf</filename> file.
</para></listitem>
- <listitem><para>Whether or not to create a sample
- append file.
+ <listitem><para>
+ A <filename>recipes-example</filename> subdirectory
+ that contains a further subdirectory named
+ <filename>example</filename>, which contains
+ an <filename>example.bb</filename> recipe file.
+ </para></listitem>
+                    <listitem><para>A <filename>COPYING.MIT</filename> file,
+                        which is the license statement for the layer.
+ The script assumes you want to use the MIT license,
+ which is typical for most layers, for the contents of
+ the layer itself.
+ </para></listitem>
+ <listitem><para>
+                        A <filename>README</filename> file, which describes
+                        the contents of your new layer.
</para></listitem>
</itemizedlist>
</para>
<para>
- Use the <filename>yocto-layer create</filename> sub-command
- to create a new general layer.
- In its simplest form, you can create a layer as follows:
+ In its simplest form, you can use the following command form
+ to create a layer.
+ The command creates a layer whose name corresponds to
+ <replaceable>your_layer_name</replaceable> in the current
+ directory:
<literallayout class='monospaced'>
- $ yocto-layer create mylayer
+ $ bitbake-layers create-layer <replaceable>your_layer_name</replaceable>
</literallayout>
- The previous example creates a layer named
- <filename>meta-mylayer</filename> in the current directory.
</para>
<para>
- As the <filename>yocto-layer create</filename> command runs,
- default values for the prompts appear in brackets.
- Pressing enter without supplying anything for the prompts
- or pressing enter and providing an invalid response causes the
- script to accept the default value.
- Once the script completes, the new layer
- is created in the current working directory.
- The script names the layer by prepending
- <filename>meta-</filename> to the name you provide.
+ If you want to set the priority of the layer to other than the
+                default value of "6", you can either use the
+                <filename>--priority</filename> option or you can
+                edit the
+                <ulink url='&YOCTO_DOCS_REF_URL;#var-BBFILE_PRIORITY'><filename>BBFILE_PRIORITY</filename></ulink>
+                value in the <filename>conf/layer.conf</filename> file after
+                the script creates it.
+                Furthermore, if you want to give the example recipe file
+                some name other than the default, you can
+                use the
+                <filename>--example-recipe-name</filename> option.
</para>
<para>
- Minimally, the script creates the following within the layer:
- <itemizedlist>
- <listitem><para><emphasis>The <filename>conf</filename>
- directory:</emphasis>
- This directory contains the layer's configuration file.
- The root name for the file is the same as the root name
- your provided for the layer (e.g.
- <filename><replaceable>layer</replaceable>.conf</filename>).
- </para></listitem>
- <listitem><para><emphasis>The
- <filename>COPYING.MIT</filename> file:</emphasis>
- The copyright and use notice for the software.
- </para></listitem>
- <listitem><para><emphasis>The <filename>README</filename>
- file:</emphasis>
- A file describing the contents of your new layer.
- </para></listitem>
- </itemizedlist>
- </para>
-
- <para>
- If you choose to generate a sample recipe file, the script
- prompts you for the name for the recipe and then creates it
- in <filename><replaceable>layer</replaceable>/recipes-example/example/</filename>.
- The script creates a <filename>.bb</filename> file and a
- directory, which contains a sample
- <filename>helloworld.c</filename> source file, along with
- a sample patch file.
- If you do not provide a recipe name, the script uses
- "example".
- </para>
-
- <para>
- If you choose to generate a sample append file, the script
- prompts you for the name for the file and then creates it
- in <filename><replaceable>layer</replaceable>/recipes-example-bbappend/example-bbappend/</filename>.
- The script creates a <filename>.bbappend</filename> file and a
- directory, which contains a sample patch file.
- If you do not provide a recipe name, the script uses
- "example".
- The script also prompts you for the version of the append file.
- The version should match the recipe to which the append file
- is associated.
- </para>
-
- <para>
- The easiest way to see how the <filename>yocto-layer</filename>
- script works is to experiment with the script.
+ The easiest way to see how the
+ <filename>bitbake-layers create-layer</filename> command
+ works is to experiment with the script.
You can also read the usage information by entering the
following:
<literallayout class='monospaced'>
- $ yocto-layer help
+ $ bitbake-layers create-layer --help
+ NOTE: Starting bitbake server...
+ usage: bitbake-layers create-layer [-h] [--priority PRIORITY]
+ [--example-recipe-name EXAMPLERECIPE]
+ layerdir
+
+ Create a basic layer
+
+ positional arguments:
+ layerdir Layer directory to create
+
+ optional arguments:
+ -h, --help show this help message and exit
+ --priority PRIORITY, -p PRIORITY
+ Layer directory to create
+ --example-recipe-name EXAMPLERECIPE, -e EXAMPLERECIPE
+ Filename of the example recipe
</literallayout>
</para>
<para>
Once you create your general layer, you must add it to your
<filename>bblayers.conf</filename> file.
- Here is an example where a layer named
- <filename>meta-mylayer</filename> is added:
+ You can add your layer by using the
+ <filename>bitbake-layers add-layer</filename> command:
<literallayout class='monospaced'>
- BBLAYERS = ?" \
- /usr/local/src/yocto/meta \
- /usr/local/src/yocto/meta-poky \
- /usr/local/src/yocto/meta-yocto-bsp \
- /usr/local/src/yocto/meta-mylayer \
- "
+ $ bitbake-layers add-layer <replaceable>your_layer_name</replaceable>
+ </literallayout>
+ Here is an example where a layer named
+ <filename>meta-scottrif</filename> is added and then the
+ layers are shown using the
+ <filename>bitbake-layers show-layers</filename> command:
+ <literallayout class='monospaced'>
+ $ bitbake-layers add-layer meta-scottrif
+ NOTE: Starting bitbake server...
+ Loading cache: 100% |############################################| Time: 0:00:00
+ Loaded 1275 entries from dependency cache.
+ Parsing recipes: 100% |##########################################| Time: 0:00:00
+ Parsing of 819 .bb files complete (817 cached, 2 parsed). 1276 targets, 44 skipped, 0 masked, 0 errors.
+ $ bitbake-layers show-layers
+ NOTE: Starting bitbake server...
+ layer path priority
+ ==========================================================================
+ meta /home/scottrif/poky/meta 5
+ meta-poky /home/scottrif/poky/meta-poky 5
+ meta-yocto-bsp /home/scottrif/poky/meta-yocto-bsp 5
+ meta-mylayer /home/scottrif/meta-mylayer 6
+ workspace /home/scottrif/poky/build/workspace 99
+ meta-scottrif /home/scottrif/poky/build/meta-scottrif 6
</literallayout>
Adding the layer to this file enables the build system to
locate the layer during the build.
@@ -1193,7 +1215,7 @@
from within a recipe and using
<filename>EXTRA_IMAGE_FEATURES</filename> from within
your <filename>local.conf</filename> file, which is found in the
- <link linkend='build-directory'>Build Directory</link>.
+ <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>.
</para>
<para>
@@ -1496,6 +1518,11 @@
similar in function to the recipe you need.
</para></listitem>
</itemizedlist>
+ <note>
+ For information on recipe syntax, see the
+ "<ulink url='&YOCTO_DOCS_REF_URL;#recipe-syntax'>Recipe Syntax</ulink>"
+ section in the Yocto Project Reference Manual.
+ </note>
</para>
<section id='new-recipe-creating-the-base-recipe-using-devtool'>
@@ -1516,8 +1543,9 @@
<para>
You can find a complete description of the
<filename>devtool add</filename> command in the
- "<link linkend='use-devtool-to-integrate-new-code'>Use <filename>devtool add</filename> to Add an Application</link>"
- section.
+ "<ulink url='&YOCTO_DOCS_SDK_URL;#sdk-a-closer-look-at-devtool-add'>A Closer Look at <filename>devtool</filename> add</ulink>"
+ section in the Yocto Project Application Development
+ and the Extensible Software Development Kit (eSDK) manual.
</para>
</section>
@@ -1540,12 +1568,10 @@
<para>
To run the tool, you just need to be in your
- <link linkend='build-directory'>Build Directory</link>
+ <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>
and have sourced the build environment setup script
(i.e.
- <ulink url='&YOCTO_DOCS_REF_URL;#structure-core-script'><filename>oe-init-build-env</filename></ulink>
- or
- <ulink url='&YOCTO_DOCS_REF_URL;#structure-memres-core-script'><filename>oe-init-build-env-memres</filename></ulink>).
+ <ulink url='&YOCTO_DOCS_REF_URL;#structure-core-script'><filename>oe-init-build-env</filename></ulink>).
Here is the basic <filename>recipetool</filename> syntax:
<note>
Running <filename>recipetool -h</filename> or
@@ -1715,303 +1741,6 @@
</itemizedlist>
</section>
- <section id='understanding-recipe-syntax'>
- <title>Understanding Recipe Syntax</title>
-
- <para>
- Understanding recipe file syntax is important for
- writing recipes.
- The following list overviews the basic items that make up a
- BitBake recipe file.
- For more complete BitBake syntax descriptions, see the
- "<ulink url='&YOCTO_DOCS_BB_URL;#bitbake-user-manual-metadata'>Syntax and Operators</ulink>"
- chapter of the BitBake User Manual.
- <itemizedlist>
- <listitem><para><emphasis>Variable Assignments and Manipulations:</emphasis>
- Variable assignments allow a value to be assigned to a
- variable.
- The assignment can be static text or might include
- the contents of other variables.
- In addition to the assignment, appending and prepending
- operations are also supported.</para>
- <para>The following example shows some of the ways
- you can use variables in recipes:
- <literallayout class='monospaced'>
- S = "${WORKDIR}/postfix-${PV}"
- CFLAGS += "-DNO_ASM"
- SRC_URI_append = " file://fixup.patch"
- </literallayout>
- </para></listitem>
- <listitem><para><emphasis>