subtree updates

poky: 67266331b0..835f7eac06:
  Adrian Bunk (9):
        valgrind: Remove dependency on libx11
        bluez5: Remove obsolete dependency on dbus-glib
        python3-dbus: Remove obsolete dependency on dbus-glib
        cups: Remove unnecessary dependency on dbus-glib
        libnotify: Remove obsolete dependency on dbus-glib
        unfs3: Switch to new upstream location
        i2c-tools: Add alternative for i2ctransfer
        meta: Remove remnants of bluez4 support
        e2fsprogs: Remove patch that disabled 64bit for ext4 by default

  Adrian Freihofer (1):
        yocto-bsp: runqemu runs beaglebone-yocto

  Adrian Ratiu (1):
        opkg/package/rootfs_ipk: allow overwriting OPKGLIBDIR

  Alejandro del Castillo (1):
        opkg: upgrade to version 0.4.1

  Alexander Kanavin (3):
        rt-tests: exclude 1.4 version from upstream check as well
        gtk-doc: correct the style.css permissions
        mobile-broadband-provider-info: upgrade 20190116 -> 20190618

  Alistair Francis (7):
        mesa: Add support for the lima PACKAGECONFIG
        u-boot: Update to 2019.07
        packagegroup-core-sdk: Set blank sanitiser for RISC-V 32
        opensbi: Update from 0.3 to 0.4
        opensbi: Fix installed-vs-shipped warning
        qemurunner.py: Be more verbose about problems
        package_manager: Ensure the base-feed directory exists

  Andrej Valek (2):
        busybox: 1.30.1 -> 1.31.0
        oe/copy_buildsystem: move layer into layers directory

  Anuj Mittal (25):
        gstreamer1.0-plugins-bad: depend on vulkan-loader now
        vulkan-demos: depend on vulkan-loader
        vulkan: remove
        binutils: fix CVE-2019-12972 CVE-2019-9071
        gnupg: upgrade 2.2.16 -> 2.2.17
        libxslt: fix CVE-2019-13117 CVE-2019-13118
        libva: upgrade 2.4.1 -> 2.5.0
        libva-utils: upgrade 2.4.0 -> 2.5.0
        nasm: fix CVE-2018-19755
        python: fix CVE-2019-9740
        python3: upgrade 3.7.3 -> 3.7.4
        binutils: CVE-2019-9070 is same as CVE-2019-9071
        qemu: fix CVE-2019-12155
        bzip2: upgrade 1.0.7 -> 1.0.8
        glib-2.0: upgrade 2.60.4 -> 2.60.5
        vte: upgrade 0.56.1 -> 0.56.3
        openssl: set CVE vendor to openssl
        curl: upgrade 7.65.1 -> 7.65.2
        rsync: fix CVEs for included zlib
        glibc: CVE-2018-20796 is same as CVE-2019-9169
        unzip: fix CVE-2019-13232
        python: include CVE patches for python-native as well
        gdb: fix CVE-2017-9778
        iptables: upgrade 1.8.2 -> 1.8.3
        piglit: fix SRC_URI

  Armin Kuster (1):
        timezone: update to 2019b

  Bonnans, Laurent (1):
        openssl: fix valgrind errors on v1.1.1c

  Bruce Ashfield (5):
        linux-yocto/5.0: bsp: add basic xilinx zynqmp support
        linux-yocto/5.0: make scsi-debug include scsi core configs
        linux-yocto: bsp/beaglebone: support qemu -machine virt
        linux-yocto/4.19: update to 4.19.57 and -rt22
        package: check PKG_ variables before executing ontarget postinst

  CHerzig@Gauselmann.de (1):
        bitbake: fetch2/clearcase: Fix class import errors

  Changqing Li (5):
        quilt: run-ptest remove Interactive Input
        mdadm: fix systemd service start up failure
        mdadm: fix mdmonitor start up failure
        opkg: make ptest output format align with common style
        mdadm: make ptest output format align with common style

  Chee Yang Lee (1):
        wic: add support for kernel with initramfs bundled

  Chen Qi (13):
        target-sdk-provides-dummy: add libperl.so.5 64bit
        devtool: warn user about multiple layers having the same base name
        image.bbclass: fix systemd_preset_all
        devtool.py: track to clean devtool.conf in test_create_workspace
        grub-efi.bbclass: take multilib into consideration
        sysstat: use service file from source codes
        xmlcatalog: hold libxml2-native dependency
        oeqa/runtime/rpm: ensure no user process running before deleting user
        oeqa/runtime/rpm: Move test_rpm_query_nonroot test case to RpmBasicTest
        qemurunner.py: fix race condition at qemu startup
        msmtp: use alternatives to manage /usr/lib/sendmail
        runtime_test.py: use track_for_cleanup for temp dir
        devtool: remove temp dir in upgrade

  Fabio Berton (1):
        mesa: Update 19.1.0 -> 19.1.1

  Haiqing Bai (1):
        sysstat: Use sysstat.service in source for cron with systemd

  He Zhe (1):
        ltp: file01: Fix in was not recognized

  Hongzhi.Song (3):
        ltp: fix shmctl01 failure when executed.
        ltp: diotest4: Let kernel pick an address when calling mmap
        ltp: getrlimit03: adjust a bit of code to be compatible with mips32

  Jason Wessel (5):
        glibc: Fix multilibs + usrmerge builds
        psmisc: Fix dependency for USE_NLS=no
        glibc-locale: Fix build error with PACKAGE_NO_GCONV = "1"
        glibc/glibc-locale: Fix do_stash_locale to work with usrmerge and multilibs
        glibc / glibc-locale: Fix stash_locale determinism problems

  Joe Slater (1):
        libtool: remove host information from libtool

  Jon Mason (1):
        oe_syslog.py: Handle syslogd/klogd restart race

  Joshua Watt (5):
        python3: Fix .pyc file reproducibility
        oeqa: Test bitbake --skip-setscene
        bitbake: bitbake: Add --skip-setscene option
        classes/icecc: Disable remote pre-processing by default
        scripts/buildstats-diff: Add option to filter tasks

  Joël Esponde (1):
        package.bbclass: fix directories setuid and setgid bits

  Jun Nie (1):
        kernel-fitimage: uboot-sign: fix missing signature

  Kai Kang (4):
        rng-tools: fix rngd blocks system shutdown
        openssl: fix multilib files conflict
        webkitgtk: set incompatible with tune mips
        defaultsetup.conf: enable select init manager

  Khem Raj (10):
        efibootmgr: Pass correct flags to compiler from pkg-config
        mpeg2dec: Fix PIE build and avoid relocation in text section on ARM
        Revert "unzip: fix CVE-2019-13232"
        musl: Upgrade to 1.1.23+
        mdadm: Include sys/sysmacros.h for major/minor definitions
        sysvinit: Include sys/sysmacros.h for major/minor definitions on musl too
        pam_systemd: Include missing.h for secure_getenv
        musl-obstack: Add recipe
        elfutils: Fix eu-* utils builds for musl
        maintainers: Account for musl-obstack and libssp-nonshared

  Li Zhou (2):
        bc: dc: fix exit code of q command
        iptables: Security Advisory - iptables - CVE-2019-11360

  Luca Boccassi (1):
        bitbake: tests/fetch.py: add missing skipIfNoNetwork tags to tests that try to git clone

  Matthias Schiffer (1):
        systemd: backport patch to fix sysctl warning on boot

  Mike Crowe (4):
        bitbake.conf: Stop exporting TARGET_ flags variables
        image.bbclass: Only append to IMAGE_LINK_NAME if it was already set
        rootfs-postcommands: Cope with empty IMAGE_LINK_NAME in write_image_manifest
        rootfs-postcommands: Cope with empty IMAGE_LINK_NAME in write_image_test_data

  Mikko Rapeli (3):
        busybox: enable unicode support
        cve-check.bbclass: initialize to_append
        freetype: add --tag CC to libtool arguments

  Mingli Yu (2):
        go.bbclass: separate the ptest logic to go-ptest class
        mdadm: fix ptest hang

  Oleksandr Kravchuk (34):
        mc: update to 4.8.23
        encodings: update to 1.0.5
        gawk: update to 5.0.1
        libinput: update to 1.13.3
        libxi: update to 1.7.10
        libxt: update to 1.2.0
        autoconf-archive: update to 2019.01.06
        python3-mako: update to 1.0.12
        python3-pbr: update to 5.3.1
        python3-pygobject: update to 3.32.2
        git: update to 2.22.0
        eudev: update to 3.2.8
        babeltrace: update to 1.5.7
        dpkg: update to 1.19.7
        apt: update to 1.2.31
        libinput: update to 1.13.4
        expat: update to 2.2.7
        libsolv: update to 0.7.5
        bison: update to 3.4.1
        ruby: update to 2.5.5
        quilt: update to 0.66
        bzip2: update to 1.0.7
        python3-mako: update to 1.0.13
        ifupdown: update to 0.8.22
        libdrm: update to 2.4.99
        python3-pbr: update to 5.4.0
        linux-firmware: bump to 20190618
        iproute2: update to 5.2.0
        udev-extraconf: do not mount swap partitions
        python3-pbr: update to 5.4.1
        xinput: update to 1.6.3
        python3-scons: update to 3.1.0
        python3-docutils: update to 0.15
        python3-mako: update to 1.0.14

  Pascal Bach (1):
        cmake: 3.14.1 -> 3.14.5

  Paul Eggleton (7):
        libcap-ng: do not use symlink to share files with libcap-ng-python
        scripts/contrib/ddimage: fix typo
        scripts/contrib/ddimage: replace blacklist with mount check
        scripts/contrib/ddimage: be explicit whether device doesn't exist or isn't writeable
        list-packageconfig-flags: print PN instead of P
        recipetool: ignore zero-length setup.py files
        devtool: upgrade: fix handling of errors parsing upgraded recipe

  Peter Kjellerstedt (4):
        glib-2.0: Update to 2.60.4
        glibc-package.inc: Do not use bitbake variable syntax for shell variables
        meson.bbclass: Remove the MESON_*_ARGS variables
        nativesdk-meson: Remove some unused variables

  Pierre Le Magourou (10):
        cve-update-db: Use std library instead of urllib3
        cve-update-db: Manage proxy if needed.
        cve-update-db: do_populate_cve_db depends on do_fetch
        cve-update-db: Catch request.urlopen errors.
        cve-check: Depends on cve-update-db-native
        cve-update-db: Use NVD CPE data to populate PRODUCTS table
        cve-check: Update unpatched CVE matching
        cve-update-db-native: Skip recipe when cve-check class is not loaded.
        cve-check: Replace CVE_CHECK_CVE_WHITELIST by CVE_CHECK_WHITELIST
        cve-update-db-native: Remove hash column from database.

  Ricardo Ribalda Delgado (4):
        nfs-mountd: Add missing dependency on systemd service
        systemd: Fix interface bring-up on kernels >= 5.2
        wic: Fix (again) partition files UIDs on multi rootfs images
        systemd-bootconf: Mark as machine specific

  Ricardo Salveti (1):
        gcc-9.1: add back GLIBC_DYNAMIC_LINKER riscv changes

  Richard Purdie (58):
        multilib_global: Fix multilib rebuild issue
        multilib_global: Fix KERNEL_VERSION expansion problems
        sysklogd: Fix init script races
        busybox: Improve syslog restart handling
        oeqa/runtime/syslog: Improve test debug messages
        oeqa/runtime/oesyslog: systemd syslog restart doesn't change pid
        oeqa/runtime/syslog: Add delay to test to avoid failures
        busybox: Fix typo in syslog initscript
        pigz: Add debug for autobuilder errors
        staging: Code cleanup
        package: Build pkgdata specific to the current recipe
        Revert "pigz: Add debug for autobuilder errors"
        grub2: Drop unneeded code
        bitbake: event: Clear ui_queue after handling it
        bitbake: main: Ensure log messages are printed when no UI starts
        bitbake: main: Alter EOFError handling
        core-image-sato-sdk-ptest: Reduce image padding size due to bootimg 4GB limit
        oeqa/bbtests: Tweak test bitbake output pattern matching
        sstate: Add tweak to avoid multiple sstate stats messages
        bitbake: siggen: Fix default handler
        bitbake: siggen: Use unique hashes for tasks
        bitbake: runqueue: Tweak buildable variable handling in scheduler
        bitbake: runqueue: Drop unused BB_SETSCENE_VERIFY_FUNCTION2
        bitbake: runqueue: Remove now unneeded code
        bitbake: runqueue: Move scenequeue data generation to a separate function
        bitbake: runqueue: Remove unused function parameter
        bitbake: runqueue: Factor out the process_setscene_whitelist checks
        bitbake: runqueue: Uniquely namespace the scenequeue functions
        bitbake: runqueue: Merge stats handling together for setscene/real tasks
        bitbake: runqueue: Merge scenequeue and real task queue code together
        bitbake: runqueue: Fix counter/task updating glitch
        bitbake: runqueue: Remove RunQueueExecuteScenequeue and RunQueueExecuteTasks
        bitbake: runqueue: Simplify _execute_runqueue logic
        bitbake: runqueue: Fold remains of the scenequeue setup into RunQueueExecute
        bitbake: event/runqueue: Drop StampUpdate event, it's pointless/unused
        bitbake: runqueue: Add covered_tasks (or 'collated_deps') to scenequeue data
        bitbake: runqueue: Simplify scenequeue unskippable calculation
        bitbake: runqueue: Tweak comments and debug code
        bitbake: runqueue: Code simplification
        bitbake: runqueue: Remove pointless variable
        bitbake: runqueue: Further scheduler buildable tasks cleanup
        bitbake: runqueue: Clarify scenequeue_covered vs. tasks_covered
        bitbake: runqueue: Merge the queues and execute setscene and normal tasks in parallel
        bitbake: runqueue: Alter setscenewhitelist handling
        bitbake: runqueue: Complete the merge of scenequeue and normal task execution
        bitbake: tests: Add initial scenario based test for runqueue
        bitbake: uihelper: No longer listen to scenequeue task started
        bitbake: runqueue: Simplify some convoluted logic
        bitbake: runqueue: Whitespace fix
        bitbake: runqueue: Abstract hash verification function
        bitbake: runqueue: Optimise multiconfig with overlapping setscene
        bitbake: tests/runqueue: Allow common sstate tasks to become valid
        bitbake: runqueue: Fix non setscene tasks targets being lost
        staging: Drop clean_recipe_sysroot
        poky-lsb: Drop features already in poky
        poky-lsb: Drop libx11 PREFERRED_PROVIDER
        distro/include: Add poky-distro-alt-test-config.inc
        bitbake: siggen: Fix handling of tainted sig files

  Robert Yang (13):
        update-alternatives.bbclass: run update-alternatives first in postinst script
        busybox: make postinst run first, before update-alternatives
        multilib.bbclass: Reduce ALTERNATIVE_PRIORITY for extended recipes
        bitbake: bitbake: lib: Cleanup /usr/bin/env python
        bitbake: bitbake: toaster:tests: python -> python3
        ksum.py: python -> python3
        wic: python2 -> python3
        ext-sdk-prepare.py: python2 -> python3
        oeqa: Cleanup /usr/bin/env python
        package_rpm.bbclass: python2 -> python3
        bitbake: cache: Remove duplicated lines for provides and rprovides
        bitbake: cache: Set packages for skipped recipes
        bitbake: cache: Create a symlink for current cachefile

  Ross Burton (56):
        cve-check: be idiomatic
        gtk-icon-cache: rename intercept to update_gtk_icon_cache
        fortran-helloworld: add a very dumb Fortran Hello World for testing
        oeqa/buildoptions: check that Fortran code actually cross-compiles
        buildhistory: write the contents of the sysroot
        buildhistory: report sysroot changes
        perl: fix Upstream-Status tags
        efivar: ensure that target security flags are not used to build native code
        multilib_script: fix whitespace
        buildhistory_analysis: ignore ownership for sysroot diffs
        insane: use clean_path for the host contamination warnings
        libsndfile1: disable use of sqlite3 by default
        libsndfile1: remove redundant autoconf seeding
        buildhistory: don't output ownership for the sysroot
        buildhistory: filter out the unexpected prefix for native/cross sysroots
        alsa-utils: disable tools using GTK+2
        packagegroup-core-lsb: remove GTK+
        recipetool: add MD5 hash for the line-wrapped MPL-1.1 license
        oeqa/recipetool: change the CMake test to use taglib
        gtk+: remove GTK+ 2
        gnome-themes-standard: remove
        Revert "sysstat: use service file from source codes"
        libpsl: update Upstream-Status
        grub: build with python 3
        qemu: use Python 3 to build
        ninja: use Python 3
        conf/poky: add debian-10 to the supported distribution list
        tiff: remove redundant patch
        tiff: fix CVE-2019-6128
        tiff: fix CVE-2019-7663
        cve-check: remove redundant readline CVE whitelisting
        cve-check-tool: remove
        glibc: exclude child recipes from CVE scanning
        libid3tag: CVE-2017-11551 is the same as CVE-2004-2779
        libid3tag: handle unknown encodings (CVE-2017-11550)
        subversion: set CVE vendor to Apache
        boost: set CVE vendor to Boost
        git: set CVE vendor to git-scm
        ed: set CVE vendor to avoid false positives
        cve-check: allow comparison of Vendor as well as Product
        flex: set CVE_PRODUCT to include vendor
        cve-update-db-native: use SQL placeholders instead of format strings
        xkeyboard-config: remove redundant intltool dependency
        piglit: upgrade to latest revision
        pkgconf: upgrade 1.6.1 -> 1.6.3
        conf/poky: add Fedora 30 and Opensuse Leap 15.1 to supported distributions
        cve-update-db-native: use os.path.join instead of +
        cve-update-db: actually inherit native
        cve-update-db-native: use executemany() to optimise CPE insertion
        cve-update-db-native: improve metadata parsing
        cve-update-db-native: clean up JSON fetching
        freetype: upgrade to 2.10.1
        unfs3: set upstream tag regex to avoid false-positives
        meson.bbclass: export STRIP=${BUILD_STRIP}
        ffmpeg: don't use hardcoded lookup tables
        ffmpeg: upgrade to 4.1.4

  Sai Hari Chandana Kalluri (3):
        devtool/standard.py: Update devtool modify to copy source from work-shared if it's already downloaded
        devtool/standard.py: Create a copy of kernel source within work-shared if not present
        devtool: provide support for devtool menuconfig command

  Scott Rifenbark (5):
        overview-manual: Fixed manual history table
        sdk-manual: Updated devtool to talk about oe-local-files.
        dev-manual: Provided proper link title
        ref-manual: Fixed typo for BBMULTICONFIG variable.
        ref-manual: Removed "python2" mention in example.

  Stefan Agner (1):
        psplash: create psplash tmpfs mount directory in psplash-init

  Tim Orling (3):
        vulkan-headers: add recipe
        vulkan-loader: add recipe
        vulkan-tools: add recipe

  Ulrich Ölmann (1):
        squashfs-tools: upgrade to commit f95864afe883

  William Bourque (2):
        wic/plugins: Source that supports both EFI and BIOS
        meta/lib/oeqa: Test for bootimg-biosplusefi Source

  Yi Zhao (2):
        debianutils: upgrade 4.8.6.1 -> 4.8.6.3
        ltp: upgrade 20190115 -> 20190517

  Zang Ruochen (9):
        nss: upgrade 3.44 -> 3.44.1
        util-linux: upgrade 2.33.2 -> 2.34
        librepo: upgrade 1.10.3 -> 1.10.4
        sqlite3: Upgrade 3.28.0 -> 3.29.0
        nss: Upgrade 3.44.1 -> 3.45
        xauth: upgrade 1.0.10 -> 1.1
        libice: upgrade 1.0.9 -> 1.0.10
        xwininfo: upgrade 1.1.4 -> 1.1.5
        libpciaccess: upgrade 0.14 -> 0.16

meta-phosphor: fe8cee7488..601f253a66:
  Brad Bishop (1):
        meta-phosphor: systemd: remove upstreamed patches

Change-Id: If591144821cd2e5b990a7aa49a1cf426f6a906de
Signed-off-by: Brad Bishop <bradleyb@fuzziesquirrel.com>
diff --git a/poky/bitbake/bin/bitbake-selftest b/poky/bitbake/bin/bitbake-selftest
index 20553e9..041a271 100755
--- a/poky/bitbake/bin/bitbake-selftest
+++ b/poky/bitbake/bin/bitbake-selftest
@@ -25,6 +25,7 @@
          "bb.tests.fetch",
          "bb.tests.parse",
          "bb.tests.persist_data",
+         "bb.tests.runqueue",
          "bb.tests.utils",
          "hashserv.tests",
          "layerindexlib.tests.layerindexobj",
diff --git a/poky/bitbake/lib/bb/cache.py b/poky/bitbake/lib/bb/cache.py
index 5fb2f17..ab18dd5 100644
--- a/poky/bitbake/lib/bb/cache.py
+++ b/poky/bitbake/lib/bb/cache.py
@@ -83,20 +83,20 @@
         self.appends = self.listvar('__BBAPPEND', metadata)
         self.nocache = self.getvar('BB_DONT_CACHE', metadata)
 
-        self.skipreason = self.getvar('__SKIPPED', metadata)
-        if self.skipreason:
-            self.pn = self.getvar('PN', metadata) or bb.parse.vars_from_file(filename,metadata)[0]
-            self.skipped = True
-            self.provides  = self.depvar('PROVIDES', metadata)
-            self.rprovides = self.depvar('RPROVIDES', metadata)
-            return
-
-        self.tasks = metadata.getVar('__BBTASKS', False)
-
-        self.pn = self.getvar('PN', metadata)
+        self.provides  = self.depvar('PROVIDES', metadata)
+        self.rprovides = self.depvar('RPROVIDES', metadata)
+        self.pn = self.getvar('PN', metadata) or bb.parse.vars_from_file(filename,metadata)[0]
         self.packages = self.listvar('PACKAGES', metadata)
         if not self.packages:
             self.packages.append(self.pn)
+        self.packages_dynamic = self.listvar('PACKAGES_DYNAMIC', metadata)
+
+        self.skipreason = self.getvar('__SKIPPED', metadata)
+        if self.skipreason:
+            self.skipped = True
+            return
+
+        self.tasks = metadata.getVar('__BBTASKS', False)
 
         self.basetaskhashes = self.taskvar('BB_BASEHASH', self.tasks, metadata)
         self.hashfilename = self.getvar('BB_HASHFILENAME', metadata)
@@ -113,11 +113,8 @@
         self.stampclean = self.getvar('STAMPCLEAN', metadata)
         self.stamp_extrainfo = self.flaglist('stamp-extra-info', self.tasks, metadata)
         self.file_checksums = self.flaglist('file-checksums', self.tasks, metadata, True)
-        self.packages_dynamic = self.listvar('PACKAGES_DYNAMIC', metadata)
         self.depends          = self.depvar('DEPENDS', metadata)
-        self.provides         = self.depvar('PROVIDES', metadata)
         self.rdepends         = self.depvar('RDEPENDS', metadata)
-        self.rprovides        = self.depvar('RPROVIDES', metadata)
         self.rrecommends      = self.depvar('RRECOMMENDS', metadata)
         self.rprovides_pkg    = self.pkgvar('RPROVIDES', self.packages, metadata)
         self.rdepends_pkg     = self.pkgvar('RDEPENDS', self.packages, metadata)
@@ -399,6 +396,15 @@
         else:
             logger.debug(1, "Cache file %s not found, building..." % self.cachefile)
 
+        # We don't use the symlink, its just for debugging convinience
+        symlink = os.path.join(self.cachedir, "bb_cache.dat")
+        if os.path.exists(symlink):
+            bb.utils.remove(symlink)
+        try:
+            os.symlink(os.path.basename(self.cachefile), symlink)
+        except OSError:
+            pass
+
     def load_cachefile(self):
         cachesize = 0
         previous_progress = 0
diff --git a/poky/bitbake/lib/bb/cooker.py b/poky/bitbake/lib/bb/cooker.py
index 0008c2f..b4851e1 100644
--- a/poky/bitbake/lib/bb/cooker.py
+++ b/poky/bitbake/lib/bb/cooker.py
@@ -1,4 +1,3 @@
-#!/usr/bin/env python
 #
 # Copyright (C) 2003, 2004  Chris Larson
 # Copyright (C) 2003, 2004  Phil Blundell
diff --git a/poky/bitbake/lib/bb/cookerdata.py b/poky/bitbake/lib/bb/cookerdata.py
index 842275d..144e803 100644
--- a/poky/bitbake/lib/bb/cookerdata.py
+++ b/poky/bitbake/lib/bb/cookerdata.py
@@ -1,4 +1,3 @@
-#!/usr/bin/env python
 #
 # Copyright (C) 2003, 2004  Chris Larson
 # Copyright (C) 2003, 2004  Phil Blundell
@@ -122,6 +121,7 @@
         self.profile = False
         self.nosetscene = False
         self.setsceneonly = False
+        self.skipsetscene = False
         self.invalidate_stamp = False
         self.dump_signatures = []
         self.dry_run = False
diff --git a/poky/bitbake/lib/bb/event.py b/poky/bitbake/lib/bb/event.py
index 7cbb5ca..d44621e 100644
--- a/poky/bitbake/lib/bb/event.py
+++ b/poky/bitbake/lib/bb/event.py
@@ -124,6 +124,7 @@
 ui_queue = []
 @atexit.register
 def print_ui_queue():
+    global ui_queue
     """If we're exiting before a UI has been spawned, display any queued
     LogRecords to the console."""
     logger = logging.getLogger("BitBake")
@@ -168,6 +169,7 @@
             logger.removeHandler(stderr)
         else:
             logger.removeHandler(stdout)
+        ui_queue = []
 
 def fire_ui_handlers(event, d):
     global _thread_lock
@@ -402,23 +404,6 @@
 class RecipeParsed(RecipeEvent):
     """ Recipe Parsing Complete """
 
-class StampUpdate(Event):
-    """Trigger for any adjustment of the stamp files to happen"""
-
-    def __init__(self, targets, stampfns):
-        self._targets = targets
-        self._stampfns = stampfns
-        Event.__init__(self)
-
-    def getStampPrefix(self):
-        return self._stampfns
-
-    def getTargets(self):
-        return self._targets
-
-    stampPrefix = property(getStampPrefix)
-    targets = property(getTargets)
-
 class BuildBase(Event):
     """Base class for bitbake build events"""
 
diff --git a/poky/bitbake/lib/bb/fetch2/clearcase.py b/poky/bitbake/lib/bb/fetch2/clearcase.py
index 9ed0d9b..3dd93ad 100644
--- a/poky/bitbake/lib/bb/fetch2/clearcase.py
+++ b/poky/bitbake/lib/bb/fetch2/clearcase.py
@@ -54,6 +54,8 @@
 import bb
 from   bb.fetch2 import FetchMethod
 from   bb.fetch2 import FetchError
+from   bb.fetch2 import MissingParameterError
+from   bb.fetch2 import ParameterError
 from   bb.fetch2 import runfetchcmd
 from   bb.fetch2 import logger
 
@@ -79,7 +81,7 @@
         if 'protocol' in ud.parm:
             ud.proto = ud.parm['protocol']
         if not ud.proto in ('http', 'https'):
-            raise fetch2.ParameterError("Invalid protocol type", ud.url)
+            raise ParameterError("Invalid protocol type", ud.url)
 
         ud.vob = ''
         if 'vob' in ud.parm:
diff --git a/poky/bitbake/lib/bb/main.py b/poky/bitbake/lib/bb/main.py
index ca59eb9..af2880f 100755
--- a/poky/bitbake/lib/bb/main.py
+++ b/poky/bitbake/lib/bb/main.py
@@ -1,4 +1,3 @@
-#!/usr/bin/env python
 #
 # Copyright (C) 2003, 2004  Chris Larson
 # Copyright (C) 2003, 2004  Phil Blundell
@@ -255,6 +254,11 @@
                           help="Do not run any setscene tasks. sstate will be ignored and "
                                "everything needed, built.")
 
+        parser.add_option("", "--skip-setscene", action="store_true",
+                          dest="skipsetscene", default=False,
+                          help="Skip setscene tasks if they would be executed. Tasks previously "
+                               "restored from sstate will be kept, unlike --no-setscene")
+
         parser.add_option("", "--setscene-only", action="store_true",
                           dest="setsceneonly", default=False,
                           help="Only run setscene tasks, don't run any real tasks.")
@@ -448,12 +452,7 @@
                             bb.utils.unlockfile(lock)
                         raise bb.server.process.ProcessTimeout("Bitbake still shutting down as socket exists but no lock?")
                 if not configParams.server_only:
-                    try:
-                        server_connection = bb.server.process.connectProcessServer(sockname, featureset)
-                    except EOFError:
-                        # The server may have been shutting down but not closed the socket yet. If that happened,
-                        # ignore it.
-                        pass
+                    server_connection = bb.server.process.connectProcessServer(sockname, featureset)
 
                 if server_connection or configParams.server_only:
                     break
@@ -464,12 +463,13 @@
                     raise
                 retries -= 1
                 tryno = 8 - retries
-                if isinstance(e, (bb.server.process.ProcessTimeout, BrokenPipeError)):
+                if isinstance(e, (bb.server.process.ProcessTimeout, BrokenPipeError, EOFError)):
                     logger.info("Retrying server connection (#%d)..." % tryno)
                 else:
                     logger.info("Retrying server connection (#%d)... (%s)" % (tryno, traceback.format_exc()))
             if not retries:
-                bb.fatal("Unable to connect to bitbake server, or start one")
+                bb.fatal("Unable to connect to bitbake server, or start one (server startup failures would be in bitbake-cookerdaemon.log).")
+            bb.event.print_ui_queue()
             if retries < 5:
                 time.sleep(5)
 
diff --git a/poky/bitbake/lib/bb/monitordisk.py b/poky/bitbake/lib/bb/monitordisk.py
index 69b25c7..1a25b00 100644
--- a/poky/bitbake/lib/bb/monitordisk.py
+++ b/poky/bitbake/lib/bb/monitordisk.py
@@ -1,4 +1,3 @@
-#!/usr/bin/env python
 #
 # Copyright (C) 2012 Robert Yang
 #
diff --git a/poky/bitbake/lib/bb/namedtuple_with_abc.py b/poky/bitbake/lib/bb/namedtuple_with_abc.py
index c8e1d55..646aed6 100644
--- a/poky/bitbake/lib/bb/namedtuple_with_abc.py
+++ b/poky/bitbake/lib/bb/namedtuple_with_abc.py
@@ -1,5 +1,4 @@
 # http://code.activestate.com/recipes/577629-namedtupleabc-abstract-base-class-mix-in-for-named/
-#!/usr/bin/env python
 # Copyright (c) 2011 Jan Kaliszewski (zuo). Available under the MIT License.
 #
 # SPDX-License-Identifier: MIT
diff --git a/poky/bitbake/lib/bb/parse/parse_py/BBHandler.py b/poky/bitbake/lib/bb/parse/parse_py/BBHandler.py
index 889f230..6f7cf82 100644
--- a/poky/bitbake/lib/bb/parse/parse_py/BBHandler.py
+++ b/poky/bitbake/lib/bb/parse/parse_py/BBHandler.py
@@ -1,4 +1,3 @@
-#!/usr/bin/env python
 """
    class for handling .bb files
 
diff --git a/poky/bitbake/lib/bb/parse/parse_py/ConfHandler.py b/poky/bitbake/lib/bb/parse/parse_py/ConfHandler.py
index 71bf61b..2e84b91 100644
--- a/poky/bitbake/lib/bb/parse/parse_py/ConfHandler.py
+++ b/poky/bitbake/lib/bb/parse/parse_py/ConfHandler.py
@@ -1,4 +1,3 @@
-#!/usr/bin/env python
 """
    class for handling configuration data files
 
diff --git a/poky/bitbake/lib/bb/parse/parse_py/__init__.py b/poky/bitbake/lib/bb/parse/parse_py/__init__.py
index cdebe44..f508afa 100644
--- a/poky/bitbake/lib/bb/parse/parse_py/__init__.py
+++ b/poky/bitbake/lib/bb/parse/parse_py/__init__.py
@@ -1,4 +1,3 @@
-#!/usr/bin/env python
 """
 BitBake Parsers
 
diff --git a/poky/bitbake/lib/bb/runqueue.py b/poky/bitbake/lib/bb/runqueue.py
index 010b085..6a2de24 100644
--- a/poky/bitbake/lib/bb/runqueue.py
+++ b/poky/bitbake/lib/bb/runqueue.py
@@ -1,4 +1,3 @@
-#!/usr/bin/env python
 """
 BitBake 'RunQueue' implementation
 
@@ -26,6 +25,7 @@
 import pickle
 from multiprocessing import Process
 import shlex
+import pprint
 
 bblogger = logging.getLogger("BitBake")
 logger = logging.getLogger("BitBake.RunQueue")
@@ -68,6 +68,14 @@
         return "mc:" + mc + ":" + fn + ":" + taskname
     return fn + ":" + taskname
 
+# Index used to pair up potentially matching multiconfig tasks
+# We match on PN, taskname and hash being equal
+def pending_hash_index(tid, rqdata):
+    (mc, fn, taskname, taskfn) = split_tid_mcfn(tid)
+    pn = rqdata.dataCaches[mc].pkg_fn[taskfn]
+    h = rqdata.runtaskentries[tid].hash
+    return pn + ":" + "taskname" + h
+
 class RunQueueStats:
     """
     Holds statistics on the tasks handled by the associated runQueue
@@ -103,8 +111,6 @@
 # runQueue state machine
 runQueuePrepare = 2
 runQueueSceneInit = 3
-runQueueSceneRun = 4
-runQueueRunInit = 5
 runQueueRunning = 6
 runQueueFailed = 7
 runQueueCleanUp = 8
@@ -143,7 +149,8 @@
         Return the id of the first task we find that is buildable
         """
         self.buildable = [x for x in self.buildable if x not in self.rq.runq_running]
-        if not self.buildable:
+        buildable = [x for x in self.buildable if (x in self.rq.tasks_covered or x in self.rq.tasks_notcovered)]
+        if not buildable:
             return None
 
         # Filter out tasks that have a max number of threads that have been exceeded
@@ -159,8 +166,8 @@
             else:
                 skip_buildable[rtaskname] = 1
 
-        if len(self.buildable) == 1:
-            tid = self.buildable[0]
+        if len(buildable) == 1:
+            tid = buildable[0]
             taskname = taskname_from_tid(tid)
             if taskname in skip_buildable and skip_buildable[taskname] >= int(self.skip_maxthread[taskname]):
                 return None
@@ -175,7 +182,7 @@
 
         best = None
         bestprio = None
-        for tid in self.buildable:
+        for tid in buildable:
             taskname = taskname_from_tid(tid)
             if taskname in skip_buildable and skip_buildable[taskname] >= int(self.skip_maxthread[taskname]):
                 continue
@@ -1194,7 +1201,6 @@
 
         self.stamppolicy = cfgData.getVar("BB_STAMP_POLICY") or "perfile"
         self.hashvalidate = cfgData.getVar("BB_HASHCHECK_FUNCTION") or None
-        self.setsceneverify = cfgData.getVar("BB_SETSCENE_VERIFY_FUNCTION2") or None
         self.depvalidate = cfgData.getVar("BB_SETSCENE_DEPVALID") or None
 
         self.state = runQueuePrepare
@@ -1203,7 +1209,7 @@
         # Invoked at regular time intervals via the bitbake heartbeat event
         # while the build is running. We generate a unique name for the handler
         # here, just in case that there ever is more than one RunQueue instance,
-        # start the handler when reaching runQueueSceneRun, and stop it when
+        # start the handler when reaching runQueueSceneInit, and stop it when
         # done with the build.
         self.dm = monitordisk.diskMonitor(cfgData)
         self.dm_event_handler_name = '_bb_diskmonitor_' + str(id(self))
@@ -1378,10 +1384,43 @@
             cache[tid] = iscurrent
         return iscurrent
 
-    def validate_hash(self, *, sq_fn, sq_task, sq_hash, sq_hashfn, siginfo, sq_unihash, d):
+    def validate_hashes(self, tocheck, data, presentcount=None, siginfo=False):
+        valid = set()
+        if self.hashvalidate:
+            sq_hash = []
+            sq_hashfn = []
+            sq_unihash = []
+            sq_fn = []
+            sq_taskname = []
+            sq_task = []
+            for tid in tocheck:
+                (mc, fn, taskname, taskfn) = split_tid_mcfn(tid)
+
+                sq_fn.append(fn)
+                sq_hashfn.append(self.rqdata.dataCaches[mc].hashfn[taskfn])
+                sq_hash.append(self.rqdata.runtaskentries[tid].hash)
+                sq_unihash.append(self.rqdata.runtaskentries[tid].unihash)
+                sq_taskname.append(taskname)
+                sq_task.append(tid)
+
+            if presentcount is not None:
+                data.setVar("BB_SETSCENE_STAMPCURRENT_COUNT", presentcount)
+
+            valid_ids = self.validate_hash(sq_fn, sq_taskname, sq_hash, sq_hashfn, siginfo, sq_unihash, data, presentcount)
+
+            if presentcount is not None:
+                data.delVar("BB_SETSCENE_STAMPCURRENT_COUNT")
+
+            for v in valid_ids:
+                valid.add(sq_task[v])
+
+        return valid
+
+    def validate_hash(self, sq_fn, sq_task, sq_hash, sq_hashfn, siginfo, sq_unihash, d, presentcount):
         locs = {"sq_fn" : sq_fn, "sq_task" : sq_task, "sq_hash" : sq_hash, "sq_hashfn" : sq_hashfn,
                 "sq_unihash" : sq_unihash, "siginfo" : siginfo, "d" : d}
 
+        # Backwards compatibility
         hashvalidate_args = ("(sq_fn, sq_task, sq_hash, sq_hashfn, d, siginfo=siginfo, sq_unihash=sq_unihash)",
                              "(sq_fn, sq_task, sq_hash, sq_hashfn, d, siginfo=siginfo)",
                              "(sq_fn, sq_task, sq_hash, sq_hashfn, d)")
@@ -1408,7 +1447,6 @@
         retval = True
 
         if self.state is runQueuePrepare:
-            self.rqexe = RunQueueExecuteDummy(self)
             # NOTE: if you add, remove or significantly refactor the stages of this
             # process then you should recalculate the weightings here. This is quite
             # easy to do - just change the next line temporarily to pass debug=True as
@@ -1422,18 +1460,19 @@
                 self.state = runQueueComplete
             else:
                 self.state = runQueueSceneInit
-                self.rqdata.init_progress_reporter.next_stage()
-
-                # we are ready to run,  emit dependency info to any UI or class which
-                # needs it
-                depgraph = self.cooker.buildDependTree(self, self.rqdata.taskData)
-                self.rqdata.init_progress_reporter.next_stage()
-                bb.event.fire(bb.event.DepTreeGenerated(depgraph), self.cooker.data)
 
         if self.state is runQueueSceneInit:
+            self.rqdata.init_progress_reporter.next_stage()
+
+            # we are ready to run,  emit dependency info to any UI or class which
+            # needs it
+            depgraph = self.cooker.buildDependTree(self, self.rqdata.taskData)
+            self.rqdata.init_progress_reporter.next_stage()
+            bb.event.fire(bb.event.DepTreeGenerated(depgraph), self.cooker.data)
+
             if not self.dm_event_handler_registered:
                  res = bb.event.register(self.dm_event_handler_name,
-                                         lambda x: self.dm.check(self) if self.state in [runQueueSceneRun, runQueueRunning, runQueueCleanUp] else False,
+                                         lambda x: self.dm.check(self) if self.state in [runQueueRunning, runQueueCleanUp] else False,
                                          ('bb.event.HeartbeatEvent',))
                  self.dm_event_handler_registered = True
 
@@ -1446,24 +1485,23 @@
                 if 'printdiff' in dump:
                     self.write_diffscenetasks(invalidtasks)
                 self.state = runQueueComplete
-            else:
-                self.rqdata.init_progress_reporter.next_stage()
-                self.start_worker()
-                self.rqdata.init_progress_reporter.next_stage()
-                self.rqexe = RunQueueExecuteScenequeue(self)
 
-        if self.state is runQueueSceneRun:
-            retval = self.rqexe.execute()
+        if self.state is runQueueSceneInit:
+            self.rqdata.init_progress_reporter.next_stage()
+            self.start_worker()
+            self.rqdata.init_progress_reporter.next_stage()
+            self.rqexe = RunQueueExecute(self)
 
-        if self.state is runQueueRunInit:
-            if self.cooker.configuration.setsceneonly:
-                self.state = runQueueComplete
-            else:
-                # Just in case we didn't setscene
-                self.rqdata.init_progress_reporter.finish()
-                logger.info("Executing RunQueue Tasks")
-                self.rqexe = RunQueueExecuteTasks(self)
-                self.state = runQueueRunning
+            # If we don't have any setscene functions, skip execution
+            if len(self.rqdata.runq_setscene_tids) == 0:
+                logger.info('No setscene tasks')
+                for tid in self.rqdata.runtaskentries:
+                    if len(self.rqdata.runtaskentries[tid].depends) == 0:
+                        self.rqexe.setbuildable(tid)
+                    self.rqexe.tasks_notcovered.add(tid)
+                self.rqexe.sqdone = True
+            logger.info('Executing Tasks')
+            self.state = runQueueRunning
 
         if self.state is runQueueRunning:
             retval = self.rqexe.execute()
@@ -1479,11 +1517,12 @@
 
         if build_done and self.rqexe:
             self.teardown_workers()
-            if self.rqexe.stats.failed:
-                logger.info("Tasks Summary: Attempted %d tasks of which %d didn't need to be rerun and %d failed.", self.rqexe.stats.completed + self.rqexe.stats.failed, self.rqexe.stats.skipped, self.rqexe.stats.failed)
-            else:
-                # Let's avoid the word "failed" if nothing actually did
-                logger.info("Tasks Summary: Attempted %d tasks of which %d didn't need to be rerun and all succeeded.", self.rqexe.stats.completed, self.rqexe.stats.skipped)
+            if self.rqexe:
+                if self.rqexe.stats.failed:
+                    logger.info("Tasks Summary: Attempted %d tasks of which %d didn't need to be rerun and %d failed.", self.rqexe.stats.completed + self.rqexe.stats.failed, self.rqexe.stats.skipped, self.rqexe.stats.failed)
+                else:
+                    # Let's avoid the word "failed" if nothing actually did
+                    logger.info("Tasks Summary: Attempted %d tasks of which %d didn't need to be rerun and all succeeded.", self.rqexe.stats.completed, self.rqexe.stats.skipped)
 
         if self.state is runQueueFailed:
             raise bb.runqueue.TaskFailure(self.rqexe.failed_tids)
@@ -1566,16 +1605,8 @@
 
     def print_diffscenetasks(self):
 
-        valid = []
-        sq_hash = []
-        sq_hashfn = []
-        sq_unihash = []
-        sq_fn = []
-        sq_taskname = []
-        sq_task = []
         noexec = []
-        stamppresent = []
-        valid_new = set()
+        tocheck = set()
 
         for tid in self.rqdata.runtaskentries:
             (mc, fn, taskname, taskfn) = split_tid_mcfn(tid)
@@ -1585,18 +1616,9 @@
                 noexec.append(tid)
                 continue
 
-            sq_fn.append(fn)
-            sq_hashfn.append(self.rqdata.dataCaches[mc].hashfn[taskfn])
-            sq_hash.append(self.rqdata.runtaskentries[tid].hash)
-            sq_unihash.append(self.rqdata.runtaskentries[tid].unihash)
-            sq_taskname.append(taskname)
-            sq_task.append(tid)
+            tocheck.add(tid)
 
-        valid = self.validate_hash(sq_fn=sq_fn, sq_task=sq_taskname, sq_hash=sq_hash, sq_hashfn=sq_hashfn,
-                siginfo=True, sq_unihash=sq_unihash, d=self.cooker.data)
-
-        for v in valid:
-            valid_new.add(sq_task[v])
+        valid_new = self.validate_hashes(tocheck, self.cooker.data, None, True)
 
         # Tasks which are both setscene and noexec never care about dependencies
         # We therefore find tasks which are setscene and noexec and mark their
@@ -1680,6 +1702,7 @@
                 output = bb.siggen.compare_sigfiles(latestmatch, match, recursecb)
                 bb.plain("\nTask %s:%s couldn't be used from the cache because:\n  We need hash %s, closest matching task was %s\n  " % (pn, taskname, h, prevh) + '\n  '.join(output))
 
+
 class RunQueueExecute:
 
     def __init__(self, rq):
@@ -1691,6 +1714,10 @@
         self.number_tasks = int(self.cfgData.getVar("BB_NUMBER_THREADS") or 1)
         self.scheduler = self.cfgData.getVar("BB_SCHEDULER") or "speed"
 
+        self.sq_buildable = set()
+        self.sq_running = set()
+        self.sq_live = set()
+
         self.runq_buildable = set()
         self.runq_running = set()
         self.runq_complete = set()
@@ -1698,9 +1725,15 @@
         self.build_stamps = {}
         self.build_stamps2 = []
         self.failed_tids = []
+        self.sq_deferred = {}
 
         self.stampcache = {}
 
+        self.sqdone = False
+
+        self.stats = RunQueueStats(len(self.rqdata.runtaskentries))
+        self.sq_stats = RunQueueStats(len(self.rqdata.runq_setscene_tids))
+
         for mc in rq.worker:
             rq.worker[mc].pipe.setrunqueueexec(self)
         for mc in rq.fakeworker:
@@ -1709,6 +1742,31 @@
         if self.number_tasks <= 0:
              bb.fatal("Invalid BB_NUMBER_THREADS %s" % self.number_tasks)
 
+        # List of setscene tasks which we've covered
+        self.scenequeue_covered = set()
+        # List of tasks which are covered (including setscene ones)
+        self.tasks_covered = set()
+        self.tasks_scenequeue_done = set()
+        self.scenequeue_notcovered = set()
+        self.tasks_notcovered = set()
+        self.scenequeue_notneeded = set()
+
+        self.coveredtopocess = set()
+
+        schedulers = self.get_schedulers()
+        for scheduler in schedulers:
+            if self.scheduler == scheduler.name:
+                self.sched = scheduler(self, self.rqdata)
+                logger.debug(1, "Using runqueue scheduler '%s'", scheduler.name)
+                break
+        else:
+            bb.fatal("Invalid scheduler '%s'.  Available schedulers: %s" %
+                     (self.scheduler, ", ".join(obj.name for obj in schedulers)))
+
+        if len(self.rqdata.runq_setscene_tids) > 0:
+            self.sqdata = SQData()
+            build_scenequeue_data(self.sqdata, self.rqdata, self.rq, self.cooker, self.stampcache, self)
+
     def runqueue_process_waitpid(self, task, status):
 
         # self.build_stamps[pid] may not exist when use shared work directory.
@@ -1716,10 +1774,17 @@
             self.build_stamps2.remove(self.build_stamps[task])
             del self.build_stamps[task]
 
-        if status != 0:
-            self.task_fail(task, status)
+        if task in self.sq_live:
+            if status != 0:
+                self.sq_task_fail(task, status)
+            else:
+                self.sq_task_complete(task)
+            self.sq_live.remove(task)
         else:
-            self.task_complete(task)
+            if status != 0:
+                self.task_fail(task, status)
+            else:
+                self.task_complete(task)
         return True
 
     def finish_now(self):
@@ -1748,8 +1813,9 @@
     def finish(self):
         self.rq.state = runQueueCleanUp
 
-        if self.stats.active > 0:
-            bb.event.fire(runQueueExitWait(self.stats.active), self.cfgData)
+        active = self.stats.active + self.sq_stats.active
+        if active > 0:
+            bb.event.fire(runQueueExitWait(active), self.cfgData)
             self.rq.read_workers()
             return self.rq.active_fds()
 
@@ -1760,7 +1826,8 @@
         self.rq.state = runQueueComplete
         return True
 
-    def check_dependencies(self, task, taskdeps, setscene = False):
+    # Used by setscene only
+    def check_dependencies(self, task, taskdeps):
         if not self.rq.depvalidate:
             return False
 
@@ -1776,121 +1843,10 @@
         return valid
 
     def can_start_task(self):
-        can_start = self.stats.active < self.number_tasks
+        active = self.stats.active + self.sq_stats.active
+        can_start = active < self.number_tasks
         return can_start
 
-class RunQueueExecuteDummy(RunQueueExecute):
-    def __init__(self, rq):
-        self.rq = rq
-        self.stats = RunQueueStats(0)
-
-    def finish(self):
-        self.rq.state = runQueueComplete
-        return
-
-class RunQueueExecuteTasks(RunQueueExecute):
-    def __init__(self, rq):
-        RunQueueExecute.__init__(self, rq)
-
-        self.stats = RunQueueStats(len(self.rqdata.runtaskentries))
-
-        self.stampcache = {}
-
-        initial_covered = self.rq.scenequeue_covered.copy()
-
-        # Mark initial buildable tasks
-        for tid in self.rqdata.runtaskentries:
-            if len(self.rqdata.runtaskentries[tid].depends) == 0:
-                self.runq_buildable.add(tid)
-            if len(self.rqdata.runtaskentries[tid].revdeps) > 0 and self.rqdata.runtaskentries[tid].revdeps.issubset(self.rq.scenequeue_covered):
-                self.rq.scenequeue_covered.add(tid)
-
-        found = True
-        while found:
-            found = False
-            for tid in self.rqdata.runtaskentries:
-                if tid in self.rq.scenequeue_covered:
-                    continue
-                logger.debug(1, 'Considering %s: %s' % (tid, str(self.rqdata.runtaskentries[tid].revdeps)))
-
-                if len(self.rqdata.runtaskentries[tid].revdeps) > 0 and self.rqdata.runtaskentries[tid].revdeps.issubset(self.rq.scenequeue_covered):
-                    if tid in self.rq.scenequeue_notcovered:
-                        continue
-                    found = True
-                    self.rq.scenequeue_covered.add(tid)
-
-        logger.debug(1, 'Skip list (pre setsceneverify) %s', sorted(self.rq.scenequeue_covered))
-
-        # Allow the metadata to elect for setscene tasks to run anyway
-        covered_remove = set()
-        if self.rq.setsceneverify:
-            invalidtasks = []
-            tasknames = {}
-            fns = {}
-            for tid in self.rqdata.runtaskentries:
-                (mc, fn, taskname, taskfn) = split_tid_mcfn(tid)
-                taskdep = self.rqdata.dataCaches[mc].task_deps[taskfn]
-                fns[tid] = taskfn
-                tasknames[tid] = taskname
-                if 'noexec' in taskdep and taskname in taskdep['noexec']:
-                    continue
-                if self.rq.check_stamp_task(tid, taskname + "_setscene", cache=self.stampcache):
-                    logger.debug(2, 'Setscene stamp current for task %s', tid)
-                    continue
-                if self.rq.check_stamp_task(tid, taskname, recurse = True, cache=self.stampcache):
-                    logger.debug(2, 'Normal stamp current for task %s', tid)
-                    continue
-                invalidtasks.append(tid)
-
-            call = self.rq.setsceneverify + "(covered, tasknames, fns, d, invalidtasks=invalidtasks)"
-            locs = { "covered" : self.rq.scenequeue_covered, "tasknames" : tasknames, "fns" : fns, "d" : self.cooker.data, "invalidtasks" : invalidtasks }
-            covered_remove = bb.utils.better_eval(call, locs)
-
-        def removecoveredtask(tid):
-            (mc, fn, taskname, taskfn) = split_tid_mcfn(tid)
-            taskname = taskname + '_setscene'
-            bb.build.del_stamp(taskname, self.rqdata.dataCaches[mc], taskfn)
-            self.rq.scenequeue_covered.remove(tid)
-
-        toremove = covered_remove | self.rq.scenequeue_notcovered
-        for task in toremove:
-            logger.debug(1, 'Not skipping task %s due to setsceneverify', task)
-        while toremove:
-            covered_remove = []
-            for task in toremove:
-                if task in self.rq.scenequeue_covered:
-                    removecoveredtask(task)
-                for deptask in self.rqdata.runtaskentries[task].depends:
-                    if deptask not in self.rq.scenequeue_covered:
-                        continue
-                    if deptask in toremove or deptask in covered_remove or deptask in initial_covered:
-                        continue
-                    logger.debug(1, 'Task %s depends on task %s so not skipping' % (task, deptask))
-                    covered_remove.append(deptask)
-            toremove = covered_remove
-
-        logger.debug(1, 'Full skip list %s', self.rq.scenequeue_covered)
-
-
-        for mc in self.rqdata.dataCaches:
-            target_pairs = []
-            for tid in self.rqdata.target_tids:
-                (tidmc, fn, taskname, _) = split_tid_mcfn(tid)
-                if tidmc == mc:
-                    target_pairs.append((fn, taskname))
-
-            event.fire(bb.event.StampUpdate(target_pairs, self.rqdata.dataCaches[mc].stamp), self.cfgData)
-
-        schedulers = self.get_schedulers()
-        for scheduler in schedulers:
-            if self.scheduler == scheduler.name:
-                self.sched = scheduler(self, self.rqdata)
-                logger.debug(1, "Using runqueue scheduler '%s'", scheduler.name)
-                break
-        else:
-            bb.fatal("Invalid scheduler '%s'.  Available schedulers: %s" %
-                     (self.scheduler, ", ".join(obj.name for obj in schedulers)))
-
     def get_schedulers(self):
         schedulers = set(obj for obj in globals().values()
                              if type(obj) is type and
@@ -1964,65 +1920,137 @@
 
     def execute(self):
         """
-        Run the tasks in a queue prepared by rqdata.prepare()
+        Run the tasks in a queue prepared by prepare_runqueue
         """
 
-        if self.rqdata.setscenewhitelist is not None and not self.rqdata.setscenewhitelist_checked:
-            self.rqdata.setscenewhitelist_checked = True
-
-            # Check tasks that are going to run against the whitelist
-            def check_norun_task(tid, showerror=False):
-                (mc, fn, taskname, taskfn) = split_tid_mcfn(tid)
-                # Ignore covered tasks
-                if tid in self.rq.scenequeue_covered:
-                    return False
-                # Ignore stamped tasks
-                if self.rq.check_stamp_task(tid, taskname, cache=self.stampcache):
-                    return False
-                # Ignore noexec tasks
-                taskdep = self.rqdata.dataCaches[mc].task_deps[taskfn]
-                if 'noexec' in taskdep and taskname in taskdep['noexec']:
-                    return False
-
-                pn = self.rqdata.dataCaches[mc].pkg_fn[taskfn]
-                if not check_setscene_enforce_whitelist(pn, taskname, self.rqdata.setscenewhitelist):
-                    if showerror:
-                        if tid in self.rqdata.runq_setscene_tids:
-                            logger.error('Task %s.%s attempted to execute unexpectedly and should have been setscened' % (pn, taskname))
-                        else:
-                            logger.error('Task %s.%s attempted to execute unexpectedly' % (pn, taskname))
-                    return True
-                return False
-            # Look to see if any tasks that we think shouldn't run are going to
-            unexpected = False
-            for tid in self.rqdata.runtaskentries:
-                if check_norun_task(tid):
-                    unexpected = True
-                    break
-            if unexpected:
-                # Run through the tasks in the rough order they'd have executed and print errors
-                # (since the order can be useful - usually missing sstate for the last few tasks
-                # is the cause of the problem)
-                task = self.sched.next()
-                while task is not None:
-                    check_norun_task(task, showerror=True)
-                    self.task_skip(task, 'Setscene enforcement check')
-                    task = self.sched.next()
-
-                self.rq.state = runQueueCleanUp
-                return True
-
         self.rq.read_workers()
 
-        if self.stats.total == 0:
-            # nothing to do
-            self.rq.state = runQueueCleanUp
+        task = None
+        if not self.sqdone and self.can_start_task():
+            # Find the next setscene to run
+            for nexttask in self.rqdata.runq_setscene_tids:
+                if nexttask in self.sq_buildable and nexttask not in self.sq_running and self.sqdata.stamps[nexttask] not in self.build_stamps.values():
+                    if nexttask not in self.sqdata.unskippable and len(self.sqdata.sq_revdeps[nexttask]) > 0 and self.sqdata.sq_revdeps[nexttask].issubset(self.scenequeue_covered) and self.check_dependencies(nexttask, self.sqdata.sq_revdeps[nexttask]):
+                        if nexttask not in self.rqdata.target_tids:
+                            logger.debug(2, "Skipping setscene for task %s" % nexttask)
+                            self.sq_task_skip(nexttask)
+                            self.scenequeue_notneeded.add(nexttask)
+                            if nexttask in self.sq_deferred:
+                                del self.sq_deferred[nexttask]
+                            return True
+                    if nexttask in self.sq_deferred:
+                        if self.sq_deferred[nexttask] not in self.runq_complete:
+                            continue
+                        logger.debug(1, "Task %s no longer deferred" % nexttask)
+                        del self.sq_deferred[nexttask]
+                        valid = self.rq.validate_hashes(set([nexttask]), self.cooker.data, None, False)
+                        if not valid:
+                            logger.debug(1, "%s didn't become valid, skipping setscene" % nexttask)
+                            self.sq_task_failoutright(nexttask)
+                            return True
+                        else:
+                            self.sqdata.outrightfail.remove(nexttask)
+                    if nexttask in self.sqdata.outrightfail:
+                        logger.debug(2, 'No package found, so skipping setscene task %s', nexttask)
+                        self.sq_task_failoutright(nexttask)
+                        return True
+                    if nexttask in self.sqdata.unskippable:
+                        logger.debug(2, "Setscene task %s is unskippable" % nexttask)
+                    task = nexttask
+                    break
+        if task is not None:
+            (mc, fn, taskname, taskfn) = split_tid_mcfn(task)
+            taskname = taskname + "_setscene"
+            if self.rq.check_stamp_task(task, taskname_from_tid(task), recurse = True, cache=self.stampcache):
+                logger.debug(2, 'Stamp for underlying task %s is current, so skipping setscene variant', task)
+                self.sq_task_failoutright(task)
+                return True
 
-        task = self.sched.next()
+            if self.cooker.configuration.force:
+                if task in self.rqdata.target_tids:
+                    self.sq_task_failoutright(task)
+                    return True
+
+            if self.rq.check_stamp_task(task, taskname, cache=self.stampcache):
+                logger.debug(2, 'Setscene stamp current task %s, so skip it and its dependencies', task)
+                self.sq_task_skip(task)
+                return True
+
+            if self.cooker.configuration.skipsetscene:
+                logger.debug(2, 'No setscene tasks should be executed. Skipping %s', task)
+                self.sq_task_failoutright(task)
+                return True
+
+            startevent = sceneQueueTaskStarted(task, self.sq_stats, self.rq)
+            bb.event.fire(startevent, self.cfgData)
+
+            taskdepdata = self.sq_build_taskdepdata(task)
+
+            taskdep = self.rqdata.dataCaches[mc].task_deps[taskfn]
+            taskhash = self.rqdata.get_task_hash(task)
+            unihash = self.rqdata.get_task_unihash(task)
+            if 'fakeroot' in taskdep and taskname in taskdep['fakeroot'] and not self.cooker.configuration.dry_run:
+                if not mc in self.rq.fakeworker:
+                    self.rq.start_fakeworker(self, mc)
+                self.rq.fakeworker[mc].process.stdin.write(b"<runtask>" + pickle.dumps((taskfn, task, taskname, taskhash, unihash, True, self.cooker.collection.get_file_appends(taskfn), taskdepdata, False)) + b"</runtask>")
+                self.rq.fakeworker[mc].process.stdin.flush()
+            else:
+                self.rq.worker[mc].process.stdin.write(b"<runtask>" + pickle.dumps((taskfn, task, taskname, taskhash, unihash, True, self.cooker.collection.get_file_appends(taskfn), taskdepdata, False)) + b"</runtask>")
+                self.rq.worker[mc].process.stdin.flush()
+
+            self.build_stamps[task] = bb.build.stampfile(taskname, self.rqdata.dataCaches[mc], taskfn, noextra=True)
+            self.build_stamps2.append(self.build_stamps[task])
+            self.sq_running.add(task)
+            self.sq_live.add(task)
+            self.sq_stats.taskActive()
+            if self.can_start_task():
+                return True
+
+        if not self.sq_live and not self.sqdone and not self.sq_deferred:
+            logger.info("Setscene tasks completed")
+            logger.debug(1, 'We could skip tasks %s', "\n".join(sorted(self.scenequeue_covered)))
+
+            completeevent = sceneQueueComplete(self.sq_stats, self.rq)
+            bb.event.fire(completeevent, self.cfgData)
+
+            err = False
+            for x in self.rqdata.runtaskentries:
+                if x not in self.tasks_covered and x not in self.tasks_notcovered:
+                    logger.error("Task %s was never moved from the setscene queue" % x)
+                    err = True
+                if x not in self.tasks_scenequeue_done:
+                    logger.error("Task %s was never processed by the setscene code" % x)
+                    err = True
+                if len(self.rqdata.runtaskentries[x].depends) == 0 and x not in self.runq_buildable:
+                    logger.error("Task %s was never marked as buildable by the setscene code" % x)
+                    err = True
+            if err:
+                self.rq.state = runQueueFailed
+                return True
+
+            if self.cooker.configuration.setsceneonly:
+                self.rq.state = runQueueComplete
+                return True
+            self.sqdone = True
+
+            if self.stats.total == 0:
+                # nothing to do
+                self.rq.state = runQueueComplete
+                return True
+
+        if self.cooker.configuration.setsceneonly:
+            task = None
+        else:
+            task = self.sched.next()
         if task is not None:
             (mc, fn, taskname, taskfn) = split_tid_mcfn(task)
 
-            if task in self.rq.scenequeue_covered:
+            if self.rqdata.setscenewhitelist is not None:
+                if self.check_setscenewhitelist(task):
+                    self.task_fail(task, "setscene whitelist")
+                    return True
+
+            if task in self.tasks_covered:
                 logger.debug(2, "Setscene covered task %s", task)
                 self.task_skip(task, "covered")
                 return True
@@ -2075,10 +2103,17 @@
             if self.can_start_task():
                 return True
 
-        if self.stats.active > 0:
+        if self.stats.active > 0 or self.sq_stats.active > 0:
             self.rq.read_workers()
             return self.rq.active_fds()
 
+        # No more tasks can be run. If we have deferred setscene tasks we should run them.
+        if self.sq_deferred:
+            tid = self.sq_deferred.pop(list(self.sq_deferred.keys())[0])
+            logger.warning("Runqueue deadlocked on deferred tasks, forcing task %s" % tid)
+            self.sq_task_failoutright(tid)
+            return True
+
         if len(self.failed_tids) != 0:
             self.rq.state = runQueueFailed
             return True
@@ -2087,9 +2122,9 @@
         for task in self.rqdata.runtaskentries:
             if task not in self.runq_buildable:
                 logger.error("Task %s never buildable!", task)
-            if task not in self.runq_running:
+            elif task not in self.runq_running:
                 logger.error("Task %s never ran!", task)
-            if task not in self.runq_complete:
+            elif task not in self.runq_complete:
                 logger.error("Task %s never completed!", task)
         self.rq.state = runQueueComplete
 
@@ -2131,270 +2166,100 @@
         #bb.note("Task %s: " % task + str(taskdepdata).replace("], ", "],\n"))
         return taskdepdata
 
-class RunQueueExecuteScenequeue(RunQueueExecute):
-    def __init__(self, rq):
-        RunQueueExecute.__init__(self, rq)
-
-        self.scenequeue_covered = set()
-        self.scenequeue_notcovered = set()
-        self.scenequeue_notneeded = set()
-
-        # If we don't have any setscene functions, skip this step
-        if len(self.rqdata.runq_setscene_tids) == 0:
-            rq.scenequeue_covered = set()
-            rq.scenequeue_notcovered = set()
-            rq.state = runQueueRunInit
-            return
-
-        self.stats = RunQueueStats(len(self.rqdata.runq_setscene_tids))
-
-        sq_revdeps = {}
-        sq_revdeps_new = {}
-        sq_revdeps_squash = {}
-        self.sq_harddeps = {}
-        self.stamps = {}
-
-        # We need to construct a dependency graph for the setscene functions. Intermediate
-        # dependencies between the setscene tasks only complicate the code. This code
-        # therefore aims to collapse the huge runqueue dependency tree into a smaller one
-        # only containing the setscene functions.
-
-        self.rqdata.init_progress_reporter.next_stage()
-
-        # First process the chains up to the first setscene task.
-        endpoints = {}
-        for tid in self.rqdata.runtaskentries:
-            sq_revdeps[tid] = copy.copy(self.rqdata.runtaskentries[tid].revdeps)
-            sq_revdeps_new[tid] = set()
-            if (len(sq_revdeps[tid]) == 0) and tid not in self.rqdata.runq_setscene_tids:
-                #bb.warn("Added endpoint %s" % (tid))
-                endpoints[tid] = set()
-
-        self.rqdata.init_progress_reporter.next_stage()
-
-        # Secondly process the chains between setscene tasks.
-        for tid in self.rqdata.runq_setscene_tids:
-            #bb.warn("Added endpoint 2 %s" % (tid))
-            for dep in self.rqdata.runtaskentries[tid].depends:
-                    if tid in sq_revdeps[dep]:
-                        sq_revdeps[dep].remove(tid)
-                    if dep not in endpoints:
-                        endpoints[dep] = set()
-                    #bb.warn("  Added endpoint 3 %s" % (dep))
-                    endpoints[dep].add(tid)
-
-        self.rqdata.init_progress_reporter.next_stage()
-
-        def process_endpoints(endpoints):
-            newendpoints = {}
-            for point, task in endpoints.items():
-                tasks = set()
-                if task:
-                    tasks |= task
-                if sq_revdeps_new[point]:
-                    tasks |= sq_revdeps_new[point]
-                sq_revdeps_new[point] = set()
-                if point in self.rqdata.runq_setscene_tids:
-                    sq_revdeps_new[point] = tasks
-                    tasks = set()
-                    continue
-                for dep in self.rqdata.runtaskentries[point].depends:
-                    if point in sq_revdeps[dep]:
-                        sq_revdeps[dep].remove(point)
-                    if tasks:
-                        sq_revdeps_new[dep] |= tasks
-                    if len(sq_revdeps[dep]) == 0 and dep not in self.rqdata.runq_setscene_tids:
-                        newendpoints[dep] = task
-            if len(newendpoints) != 0:
-                process_endpoints(newendpoints)
-
-        process_endpoints(endpoints)
-
-        self.rqdata.init_progress_reporter.next_stage()
-
-        # Build a list of setscene tasks which are "unskippable"
-        # These are direct endpoints referenced by the build
-        endpoints2 = {}
-        sq_revdeps2 = {}
-        sq_revdeps_new2 = {}
-        def process_endpoints2(endpoints):
-            newendpoints = {}
-            for point, task in endpoints.items():
-                tasks = set([point])
-                if task:
-                    tasks |= task
-                if sq_revdeps_new2[point]:
-                    tasks |= sq_revdeps_new2[point]
-                sq_revdeps_new2[point] = set()
-                if point in self.rqdata.runq_setscene_tids:
-                    sq_revdeps_new2[point] = tasks
-                for dep in self.rqdata.runtaskentries[point].depends:
-                    if point in sq_revdeps2[dep]:
-                        sq_revdeps2[dep].remove(point)
-                    if tasks:
-                        sq_revdeps_new2[dep] |= tasks
-                    if (len(sq_revdeps2[dep]) == 0 or len(sq_revdeps_new2[dep]) != 0) and dep not in self.rqdata.runq_setscene_tids:
-                        newendpoints[dep] = tasks
-            if len(newendpoints) != 0:
-                process_endpoints2(newendpoints)
-        for tid in self.rqdata.runtaskentries:
-            sq_revdeps2[tid] = copy.copy(self.rqdata.runtaskentries[tid].revdeps)
-            sq_revdeps_new2[tid] = set()
-            if (len(sq_revdeps2[tid]) == 0) and tid not in self.rqdata.runq_setscene_tids:
-                endpoints2[tid] = set()
-        process_endpoints2(endpoints2)
-        self.unskippable = []
-        for tid in self.rqdata.runq_setscene_tids:
-            if sq_revdeps_new2[tid]:
-                self.unskippable.append(tid)
-
-        self.rqdata.init_progress_reporter.next_stage(len(self.rqdata.runtaskentries))
-
-        for taskcounter, tid in enumerate(self.rqdata.runtaskentries):
-            if tid in self.rqdata.runq_setscene_tids:
-                deps = set()
-                for dep in sq_revdeps_new[tid]:
-                    deps.add(dep)
-                sq_revdeps_squash[tid] = deps
-            elif len(sq_revdeps_new[tid]) != 0:
-                bb.msg.fatal("RunQueue", "Something went badly wrong during scenequeue generation, aborting. Please report this problem.")
-            self.rqdata.init_progress_reporter.update(taskcounter)
-
-        self.rqdata.init_progress_reporter.next_stage()
-
-        # Resolve setscene inter-task dependencies
-        # e.g. do_sometask_setscene[depends] = "targetname:do_someothertask_setscene"
-        # Note that anything explicitly depended upon will have its reverse dependencies removed to avoid circular dependencies
-        for tid in self.rqdata.runq_setscene_tids:
-                (mc, fn, taskname, taskfn) = split_tid_mcfn(tid)
-                realtid = tid + "_setscene"
-                idepends = self.rqdata.taskData[mc].taskentries[realtid].idepends
-                self.stamps[tid] = bb.build.stampfile(taskname + "_setscene", self.rqdata.dataCaches[mc], taskfn, noextra=True)
-                for (depname, idependtask) in idepends:
-
-                    if depname not in self.rqdata.taskData[mc].build_targets:
+    def scenequeue_process_notcovered(self, task):
+        if len(self.rqdata.runtaskentries[task].depends) == 0:
+            self.setbuildable(task)
+        notcovered = set([task])
+        while notcovered:
+            new = set()
+            for t in notcovered:
+                for deptask in self.rqdata.runtaskentries[t].depends:
+                    if deptask in notcovered or deptask in new or deptask in self.rqdata.runq_setscene_tids or deptask in self.tasks_notcovered:
                         continue
+                    logger.debug(1, 'Task %s depends on non-setscene task %s so not skipping' % (t, deptask))
+                    new.add(deptask)
+                    self.tasks_notcovered.add(deptask)
+                    if len(self.rqdata.runtaskentries[deptask].depends) == 0:
+                        self.setbuildable(deptask)
+            notcovered = new
 
-                    depfn = self.rqdata.taskData[mc].build_targets[depname][0]
-                    if depfn is None:
-                         continue
-                    deptid = depfn + ":" + idependtask.replace("_setscene", "")
-                    if deptid not in self.rqdata.runtaskentries:
-                        bb.msg.fatal("RunQueue", "Task %s depends upon non-existent task %s:%s" % (realtid, depfn, idependtask))
+    def scenequeue_process_unskippable(self, task):
+        # Look up the dependency chain for non-setscene things which depend on this task
+        # and mark as 'done'/notcovered
+        ready = set([task])
+        while ready:
+            new = set()
+            for t in ready:
+                for deptask in self.rqdata.runtaskentries[t].revdeps:
+                    if deptask in ready or deptask in new or deptask in self.tasks_scenequeue_done or deptask in self.rqdata.runq_setscene_tids:
+                        continue
+                    if self.rqdata.runtaskentries[deptask].depends.issubset(self.tasks_scenequeue_done):
+                        new.add(deptask)
+                        self.tasks_scenequeue_done.add(deptask)
+                        self.tasks_notcovered.add(deptask)
+                        #logger.warning("Up: " + str(deptask))
+            ready = new
 
-                    if not deptid in self.sq_harddeps:
-                        self.sq_harddeps[deptid] = set()
-                    self.sq_harddeps[deptid].add(tid)
-
-                    sq_revdeps_squash[tid].add(deptid)
-                    # Have to zero this to avoid circular dependencies
-                    sq_revdeps_squash[deptid] = set()
-
-        self.rqdata.init_progress_reporter.next_stage()
-
-        for task in self.sq_harddeps:
-             for dep in self.sq_harddeps[task]:
-                 sq_revdeps_squash[dep].add(task)
-
-        self.rqdata.init_progress_reporter.next_stage()
-
-        #for tid in sq_revdeps_squash:
-        #    for dep in sq_revdeps_squash[tid]:
-        #        data = data + "\n   %s" % dep
-        #    bb.warn("Task %s_setscene: is %s " % (tid, data
-
-        self.sq_deps = {}
-        self.sq_revdeps = sq_revdeps_squash
-        self.sq_revdeps2 = copy.deepcopy(self.sq_revdeps)
-
-        for tid in self.sq_revdeps:
-            self.sq_deps[tid] = set()
-        for tid in self.sq_revdeps:
-            for dep in self.sq_revdeps[tid]:
-                self.sq_deps[dep].add(tid)
-
-        self.rqdata.init_progress_reporter.next_stage()
-
-        for tid in self.sq_revdeps:
-            if len(self.sq_revdeps[tid]) == 0:
-                self.runq_buildable.add(tid)
-
-        self.rqdata.init_progress_reporter.finish()
-
-        self.outrightfail = []
-        if self.rq.hashvalidate:
-            sq_hash = []
-            sq_hashfn = []
-            sq_unihash = []
-            sq_fn = []
-            sq_taskname = []
-            sq_task = []
-            noexec = []
-            stamppresent = []
-            for tid in self.sq_revdeps:
-                (mc, fn, taskname, taskfn) = split_tid_mcfn(tid)
-
-                taskdep = self.rqdata.dataCaches[mc].task_deps[taskfn]
-
-                if 'noexec' in taskdep and taskname in taskdep['noexec']:
-                    noexec.append(tid)
-                    self.task_skip(tid)
-                    bb.build.make_stamp(taskname + "_setscene", self.rqdata.dataCaches[mc], taskfn)
-                    continue
-
-                if self.rq.check_stamp_task(tid, taskname + "_setscene", cache=self.stampcache):
-                    logger.debug(2, 'Setscene stamp current for task %s', tid)
-                    stamppresent.append(tid)
-                    self.task_skip(tid)
-                    continue
-
-                if self.rq.check_stamp_task(tid, taskname, recurse = True, cache=self.stampcache):
-                    logger.debug(2, 'Normal stamp current for task %s', tid)
-                    stamppresent.append(tid)
-                    self.task_skip(tid)
-                    continue
-
-                sq_fn.append(fn)
-                sq_hashfn.append(self.rqdata.dataCaches[mc].hashfn[taskfn])
-                sq_hash.append(self.rqdata.runtaskentries[tid].hash)
-                sq_unihash.append(self.rqdata.runtaskentries[tid].unihash)
-                sq_taskname.append(taskname)
-                sq_task.append(tid)
-
-            self.cooker.data.setVar("BB_SETSCENE_STAMPCURRENT_COUNT", len(stamppresent))
-
-            valid = self.rq.validate_hash(sq_fn=sq_fn, sq_task=sq_taskname, sq_hash=sq_hash, sq_hashfn=sq_hashfn,
-                    siginfo=False, sq_unihash=sq_unihash, d=self.cooker.data)
-
-            self.cooker.data.delVar("BB_SETSCENE_STAMPCURRENT_COUNT")
-
-            valid_new = stamppresent
-            for v in valid:
-                valid_new.append(sq_task[v])
-
-            for tid in self.sq_revdeps:
-                if tid not in valid_new and tid not in noexec:
-                    logger.debug(2, 'No package found, so skipping setscene task %s', tid)
-                    self.outrightfail.append(tid)
-
-        logger.info('Executing SetScene Tasks')
-
-        self.rq.state = runQueueSceneRun
-
-    def scenequeue_updatecounters(self, task, fail = False):
-        for dep in self.sq_deps[task]:
-            if fail and task in self.sq_harddeps and dep in self.sq_harddeps[task]:
+    def scenequeue_updatecounters(self, task, fail=False):
+        for dep in self.sqdata.sq_deps[task]:
+            if fail and task in self.sqdata.sq_harddeps and dep in self.sqdata.sq_harddeps[task]:
                 logger.debug(2, "%s was unavailable and is a hard dependency of %s so skipping" % (task, dep))
-                self.scenequeue_updatecounters(dep, fail)
+                self.sq_task_failoutright(dep)
                 continue
-            if task not in self.sq_revdeps2[dep]:
+            if task not in self.sqdata.sq_revdeps2[dep]:
                 # May already have been removed by the fail case above
                 continue
-            self.sq_revdeps2[dep].remove(task)
-            if len(self.sq_revdeps2[dep]) == 0:
-                self.runq_buildable.add(dep)
+            self.sqdata.sq_revdeps2[dep].remove(task)
+            if len(self.sqdata.sq_revdeps2[dep]) == 0:
+                self.sq_buildable.add(dep)
 
-    def task_completeoutright(self, task):
+        next = set([task])
+        while next:
+            new = set()
+            for t in next:
+                self.tasks_scenequeue_done.add(t)
+                # Look down the dependency chain for non-setscene things which this task depends on
+                # and mark as 'done'
+                for dep in self.rqdata.runtaskentries[t].depends:
+                    if dep in self.rqdata.runq_setscene_tids or dep in self.tasks_scenequeue_done:
+                        continue
+                    if self.rqdata.runtaskentries[dep].revdeps.issubset(self.tasks_scenequeue_done):
+                        new.add(dep)
+                        #logger.warning(" Down: " + dep)
+            next = new
+
+        if task in self.sqdata.unskippable:
+            self.scenequeue_process_unskippable(task)
+
+        if task in self.scenequeue_notcovered:
+            logger.debug(1, 'Not skipping setscene task %s', task)
+            self.scenequeue_process_notcovered(task)
+        elif task in self.scenequeue_covered:
+            logger.debug(1, 'Queued setscene task %s', task)
+            self.coveredtopocess.add(task)
+
+        for task in self.coveredtopocess.copy():
+            if self.sqdata.sq_covered_tasks[task].issubset(self.tasks_scenequeue_done):
+                logger.debug(1, 'Processing setscene task %s', task)
+                covered = self.sqdata.sq_covered_tasks[task]
+                covered.add(task)
+
+                # If a task is in target_tids and isn't a setscene task, we can't skip it.
+                cantskip = covered.intersection(self.rqdata.target_tids).difference(self.rqdata.runq_setscene_tids)
+                for tid in cantskip:
+                    self.tasks_notcovered.add(tid)
+                    self.scenequeue_process_notcovered(tid)
+                covered.difference_update(cantskip)
+
+                # Remove notcovered tasks
+                covered.difference_update(self.tasks_notcovered)
+                self.tasks_covered.update(covered)
+                self.coveredtopocess.remove(task)
+                for tid in covered:
+                    if len(self.rqdata.runtaskentries[tid].depends) == 0:
+                        self.setbuildable(tid)
+
+    def sq_task_completeoutright(self, task):
         """
         Mark a task as completed
         Look at the reverse dependencies and mark any task with
@@ -2403,9 +2268,10 @@
 
         logger.debug(1, 'Found task %s which could be accelerated', task)
         self.scenequeue_covered.add(task)
+        self.tasks_covered.add(task)
         self.scenequeue_updatecounters(task)
 
-    def check_taskfail(self, task):
+    def sq_check_taskfail(self, task):
         if self.rqdata.setscenewhitelist is not None:
             realtask = task.split('_setscene')[0]
             (mc, fn, taskname, taskfn) = split_tid_mcfn(realtask)
@@ -2414,132 +2280,36 @@
                 logger.error('Task %s.%s failed' % (pn, taskname + "_setscene"))
                 self.rq.state = runQueueCleanUp
 
-    def task_complete(self, task):
-        self.stats.taskCompleted()
-        bb.event.fire(sceneQueueTaskCompleted(task, self.stats, self.rq), self.cfgData)
-        self.task_completeoutright(task)
+    def sq_task_complete(self, task):
+        self.sq_stats.taskCompleted()
+        bb.event.fire(sceneQueueTaskCompleted(task, self.sq_stats, self.rq), self.cfgData)
+        self.sq_task_completeoutright(task)
 
-    def task_fail(self, task, result):
-        self.stats.taskFailed()
-        bb.event.fire(sceneQueueTaskFailed(task, self.stats, result, self), self.cfgData)
+    def sq_task_fail(self, task, result):
+        self.sq_stats.taskFailed()
+        bb.event.fire(sceneQueueTaskFailed(task, self.sq_stats, result, self), self.cfgData)
         self.scenequeue_notcovered.add(task)
+        self.tasks_notcovered.add(task)
         self.scenequeue_updatecounters(task, True)
-        self.check_taskfail(task)
+        self.sq_check_taskfail(task)
 
-    def task_failoutright(self, task):
-        self.runq_running.add(task)
-        self.runq_buildable.add(task)
-        self.stats.taskSkipped()
-        self.stats.taskCompleted()
+    def sq_task_failoutright(self, task):
+        self.sq_running.add(task)
+        self.sq_buildable.add(task)
+        self.sq_stats.taskSkipped()
+        self.sq_stats.taskCompleted()
         self.scenequeue_notcovered.add(task)
+        self.tasks_notcovered.add(task)
         self.scenequeue_updatecounters(task, True)
 
-    def task_skip(self, task):
-        self.runq_running.add(task)
-        self.runq_buildable.add(task)
-        self.task_completeoutright(task)
-        self.stats.taskSkipped()
-        self.stats.taskCompleted()
+    def sq_task_skip(self, task):
+        self.sq_running.add(task)
+        self.sq_buildable.add(task)
+        self.sq_task_completeoutright(task)
+        self.sq_stats.taskSkipped()
+        self.sq_stats.taskCompleted()
 
-    def execute(self):
-        """
-        Run the tasks in a queue prepared by prepare_runqueue
-        """
-
-        self.rq.read_workers()
-
-        task = None
-        if self.can_start_task():
-            # Find the next setscene to run
-            for nexttask in self.rqdata.runq_setscene_tids:
-                if nexttask in self.runq_buildable and nexttask not in self.runq_running and self.stamps[nexttask] not in self.build_stamps.values():
-                    if nexttask in self.unskippable:
-                        logger.debug(2, "Setscene task %s is unskippable" % nexttask)
-                    if nexttask not in self.unskippable and len(self.sq_revdeps[nexttask]) > 0 and self.sq_revdeps[nexttask].issubset(self.scenequeue_covered) and self.check_dependencies(nexttask, self.sq_revdeps[nexttask], True):
-                        fn = fn_from_tid(nexttask)
-                        foundtarget = False
-
-                        if nexttask in self.rqdata.target_tids:
-                            foundtarget = True
-                        if not foundtarget:
-                            logger.debug(2, "Skipping setscene for task %s" % nexttask)
-                            self.task_skip(nexttask)
-                            self.scenequeue_notneeded.add(nexttask)
-                            return True
-                    if nexttask in self.outrightfail:
-                        self.task_failoutright(nexttask)
-                        return True
-                    task = nexttask
-                    break
-        if task is not None:
-            (mc, fn, taskname, taskfn) = split_tid_mcfn(task)
-            taskname = taskname + "_setscene"
-            if self.rq.check_stamp_task(task, taskname_from_tid(task), recurse = True, cache=self.stampcache):
-                logger.debug(2, 'Stamp for underlying task %s is current, so skipping setscene variant', task)
-                self.task_failoutright(task)
-                return True
-
-            if self.cooker.configuration.force:
-                if task in self.rqdata.target_tids:
-                    self.task_failoutright(task)
-                    return True
-
-            if self.rq.check_stamp_task(task, taskname, cache=self.stampcache):
-                logger.debug(2, 'Setscene stamp current task %s, so skip it and its dependencies', task)
-                self.task_skip(task)
-                return True
-
-            startevent = sceneQueueTaskStarted(task, self.stats, self.rq)
-            bb.event.fire(startevent, self.cfgData)
-
-            taskdepdata = self.build_taskdepdata(task)
-
-            taskdep = self.rqdata.dataCaches[mc].task_deps[taskfn]
-            taskhash = self.rqdata.get_task_hash(task)
-            unihash = self.rqdata.get_task_unihash(task)
-            if 'fakeroot' in taskdep and taskname in taskdep['fakeroot'] and not self.cooker.configuration.dry_run:
-                if not mc in self.rq.fakeworker:
-                    self.rq.start_fakeworker(self, mc)
-                self.rq.fakeworker[mc].process.stdin.write(b"<runtask>" + pickle.dumps((taskfn, task, taskname, taskhash, unihash, True, self.cooker.collection.get_file_appends(taskfn), taskdepdata, False)) + b"</runtask>")
-                self.rq.fakeworker[mc].process.stdin.flush()
-            else:
-                self.rq.worker[mc].process.stdin.write(b"<runtask>" + pickle.dumps((taskfn, task, taskname, taskhash, unihash, True, self.cooker.collection.get_file_appends(taskfn), taskdepdata, False)) + b"</runtask>")
-                self.rq.worker[mc].process.stdin.flush()
-
-            self.build_stamps[task] = bb.build.stampfile(taskname, self.rqdata.dataCaches[mc], taskfn, noextra=True)
-            self.build_stamps2.append(self.build_stamps[task])
-            self.runq_running.add(task)
-            self.stats.taskActive()
-            if self.can_start_task():
-                return True
-
-        if self.stats.active > 0:
-            self.rq.read_workers()
-            return self.rq.active_fds()
-
-        #for tid in self.sq_revdeps:
-        #    if tid not in self.runq_running:
-        #        buildable = tid in self.runq_buildable
-        #        revdeps = self.sq_revdeps[tid]
-        #        bb.warn("Found we didn't run %s %s %s" % (tid, buildable, str(revdeps)))
-
-        self.rq.scenequeue_covered = self.scenequeue_covered
-        self.rq.scenequeue_notcovered = self.scenequeue_notcovered
-
-        logger.debug(1, 'We can skip tasks %s', "\n".join(sorted(self.rq.scenequeue_covered)))
-
-        self.rq.state = runQueueRunInit
-
-        completeevent = sceneQueueComplete(self.stats, self.rq)
-        bb.event.fire(completeevent, self.cfgData)
-
-        return True
-
-    def runqueue_process_waitpid(self, task, status):
-        RunQueueExecute.runqueue_process_waitpid(self, task, status)
-
-
-    def build_taskdepdata(self, task):
+    def sq_build_taskdepdata(self, task):
         def getsetscenedeps(tid):
             deps = set()
             (mc, fn, taskname, _) = split_tid_mcfn(tid)
@@ -2577,6 +2347,268 @@
         #bb.note("Task %s: " % task + str(taskdepdata).replace("], ", "],\n"))
         return taskdepdata
 
+    def check_setscenewhitelist(self, tid):
+        # Check task that is going to run against the whitelist
+        (mc, fn, taskname, taskfn) = split_tid_mcfn(tid)
+        # Ignore covered tasks
+        if tid in self.tasks_covered:
+            return False
+        # Ignore stamped tasks
+        if self.rq.check_stamp_task(tid, taskname, cache=self.stampcache):
+            return False
+        # Ignore noexec tasks
+        taskdep = self.rqdata.dataCaches[mc].task_deps[taskfn]
+        if 'noexec' in taskdep and taskname in taskdep['noexec']:
+            return False
+
+        pn = self.rqdata.dataCaches[mc].pkg_fn[taskfn]
+        if not check_setscene_enforce_whitelist(pn, taskname, self.rqdata.setscenewhitelist):
+            if tid in self.rqdata.runq_setscene_tids:
+                msg = 'Task %s.%s attempted to execute unexpectedly and should have been setscened' % (pn, taskname)
+            else:
+                msg = 'Task %s.%s attempted to execute unexpectedly' % (pn, taskname)
+            logger.error(msg + '\nThis is usually due to missing setscene tasks. Those missing in this build were: %s' % pprint.pformat(self.scenequeue_notcovered))
+            return True
+        return False
+
+class SQData(object):
+    def __init__(self):
+        # SceneQueue dependencies
+        self.sq_deps = {}
+        # SceneQueue reverse dependencies
+        self.sq_revdeps = {}
+        # Copy of reverse dependencies used by sq processing code
+        self.sq_revdeps2 = {}
+        # Injected inter-setscene task dependencies
+        self.sq_harddeps = {}
+        # Cache of stamp files so duplicates can't run in parallel
+        self.stamps = {}
+        # Setscene tasks directly depended upon by the build
+        self.unskippable = set()
+        # List of setscene tasks which aren't present
+        self.outrightfail = set()
+        # A list of normal tasks a setscene task covers
+        self.sq_covered_tasks = {}
+
+def build_scenequeue_data(sqdata, rqdata, rq, cooker, stampcache, sqrq):
+
+    sq_revdeps = {}
+    sq_revdeps_squash = {}
+    sq_collated_deps = {}
+
+    # We need to construct a dependency graph for the setscene functions. Intermediate
+    # dependencies between the setscene tasks only complicate the code. This code
+    # therefore aims to collapse the huge runqueue dependency tree into a smaller one
+    # only containing the setscene functions.
+
+    rqdata.init_progress_reporter.next_stage()
+
+    # First process the chains up to the first setscene task.
+    endpoints = {}
+    for tid in rqdata.runtaskentries:
+        sq_revdeps[tid] = copy.copy(rqdata.runtaskentries[tid].revdeps)
+        sq_revdeps_squash[tid] = set()
+        if (len(sq_revdeps[tid]) == 0) and tid not in rqdata.runq_setscene_tids:
+            #bb.warn("Added endpoint %s" % (tid))
+            endpoints[tid] = set()
+
+    rqdata.init_progress_reporter.next_stage()
+
+    # Secondly process the chains between setscene tasks.
+    for tid in rqdata.runq_setscene_tids:
+        sq_collated_deps[tid] = set()
+        #bb.warn("Added endpoint 2 %s" % (tid))
+        for dep in rqdata.runtaskentries[tid].depends:
+                if tid in sq_revdeps[dep]:
+                    sq_revdeps[dep].remove(tid)
+                if dep not in endpoints:
+                    endpoints[dep] = set()
+                #bb.warn("  Added endpoint 3 %s" % (dep))
+                endpoints[dep].add(tid)
+
+    rqdata.init_progress_reporter.next_stage()
+
+    def process_endpoints(endpoints):
+        newendpoints = {}
+        for point, task in endpoints.items():
+            tasks = set()
+            if task:
+                tasks |= task
+            if sq_revdeps_squash[point]:
+                tasks |= sq_revdeps_squash[point]
+            if point not in rqdata.runq_setscene_tids:
+                for t in tasks:
+                    sq_collated_deps[t].add(point)
+            sq_revdeps_squash[point] = set()
+            if point in rqdata.runq_setscene_tids:
+                sq_revdeps_squash[point] = tasks
+                tasks = set()
+                continue
+            for dep in rqdata.runtaskentries[point].depends:
+                if point in sq_revdeps[dep]:
+                    sq_revdeps[dep].remove(point)
+                if tasks:
+                    sq_revdeps_squash[dep] |= tasks
+                if len(sq_revdeps[dep]) == 0 and dep not in rqdata.runq_setscene_tids:
+                    newendpoints[dep] = task
+        if len(newendpoints) != 0:
+            process_endpoints(newendpoints)
+
+    process_endpoints(endpoints)
+
+    rqdata.init_progress_reporter.next_stage()
+
+    # Build a list of setscene tasks which are "unskippable"
+    # These are direct endpoints referenced by the build
+    # Take the build endpoints (no revdeps) and find the sstate tasks they depend upon
+    new = True
+    for tid in rqdata.runtaskentries:
+        if len(rqdata.runtaskentries[tid].revdeps) == 0:
+            sqdata.unskippable.add(tid)
+    while new:
+        new = False
+        for tid in sqdata.unskippable.copy():
+            if tid in rqdata.runq_setscene_tids:
+                continue
+            sqdata.unskippable.remove(tid)
+            if len(rqdata.runtaskentries[tid].depends) == 0:
+                # These are tasks which have no setscene tasks in their chain, need to mark as directly buildable
+                sqrq.tasks_notcovered.add(tid)
+                sqrq.tasks_scenequeue_done.add(tid)
+                sqrq.setbuildable(tid)
+                sqrq.scenequeue_process_unskippable(tid)
+            sqdata.unskippable |= rqdata.runtaskentries[tid].depends
+            new = True
+
+    rqdata.init_progress_reporter.next_stage(len(rqdata.runtaskentries))
+
+    # Sanity check all dependencies could be changed to setscene task references
+    for taskcounter, tid in enumerate(rqdata.runtaskentries):
+        if tid in rqdata.runq_setscene_tids:
+            pass
+        elif len(sq_revdeps_squash[tid]) != 0:
+            bb.msg.fatal("RunQueue", "Something went badly wrong during scenequeue generation, aborting. Please report this problem.")
+        else:
+            del sq_revdeps_squash[tid]
+        rqdata.init_progress_reporter.update(taskcounter)
+
+    rqdata.init_progress_reporter.next_stage()
+
+    # Resolve setscene inter-task dependencies
+    # e.g. do_sometask_setscene[depends] = "targetname:do_someothertask_setscene"
+    # Note that anything explicitly depended upon will have its reverse dependencies removed to avoid circular dependencies
+    for tid in rqdata.runq_setscene_tids:
+        (mc, fn, taskname, taskfn) = split_tid_mcfn(tid)
+        realtid = tid + "_setscene"
+        idepends = rqdata.taskData[mc].taskentries[realtid].idepends
+        sqdata.stamps[tid] = bb.build.stampfile(taskname + "_setscene", rqdata.dataCaches[mc], taskfn, noextra=True)
+        for (depname, idependtask) in idepends:
+
+            if depname not in rqdata.taskData[mc].build_targets:
+                continue
+
+            depfn = rqdata.taskData[mc].build_targets[depname][0]
+            if depfn is None:
+                continue
+            deptid = depfn + ":" + idependtask.replace("_setscene", "")
+            if deptid not in rqdata.runtaskentries:
+                bb.msg.fatal("RunQueue", "Task %s depends upon non-existent task %s:%s" % (realtid, depfn, idependtask))
+
+            if not deptid in sqdata.sq_harddeps:
+                sqdata.sq_harddeps[deptid] = set()
+            sqdata.sq_harddeps[deptid].add(tid)
+
+            sq_revdeps_squash[tid].add(deptid)
+            # Have to zero this to avoid circular dependencies
+            sq_revdeps_squash[deptid] = set()
+
+    rqdata.init_progress_reporter.next_stage()
+
+    for task in sqdata.sq_harddeps:
+        for dep in sqdata.sq_harddeps[task]:
+            sq_revdeps_squash[dep].add(task)
+
+    rqdata.init_progress_reporter.next_stage()
+
+    #for tid in sq_revdeps_squash:
+    #    data = ""
+    #    for dep in sq_revdeps_squash[tid]:
+    #        data = data + "\n   %s" % dep
+    #    bb.warn("Task %s_setscene: is %s " % (tid, data))
+
+    sqdata.sq_revdeps = sq_revdeps_squash
+    sqdata.sq_revdeps2 = copy.deepcopy(sqdata.sq_revdeps)
+    sqdata.sq_covered_tasks = sq_collated_deps
+
+    # Build reverse version of revdeps to populate deps structure
+    for tid in sqdata.sq_revdeps:
+        sqdata.sq_deps[tid] = set()
+    for tid in sqdata.sq_revdeps:
+        for dep in sqdata.sq_revdeps[tid]:
+            sqdata.sq_deps[dep].add(tid)
+
+    rqdata.init_progress_reporter.next_stage()
+
+    multiconfigs = set()
+    for tid in sqdata.sq_revdeps:
+        multiconfigs.add(mc_from_tid(tid))
+        if len(sqdata.sq_revdeps[tid]) == 0:
+            sqrq.sq_buildable.add(tid)
+
+    rqdata.init_progress_reporter.finish()
+
+    if rq.hashvalidate:
+        noexec = []
+        stamppresent = []
+        tocheck = set()
+
+        for tid in sqdata.sq_revdeps:
+            (mc, fn, taskname, taskfn) = split_tid_mcfn(tid)
+
+            taskdep = rqdata.dataCaches[mc].task_deps[taskfn]
+
+            if 'noexec' in taskdep and taskname in taskdep['noexec']:
+                noexec.append(tid)
+                sqrq.sq_task_skip(tid)
+                bb.build.make_stamp(taskname + "_setscene", rqdata.dataCaches[mc], taskfn)
+                continue
+
+            if rq.check_stamp_task(tid, taskname + "_setscene", cache=stampcache):
+                logger.debug(2, 'Setscene stamp current for task %s', tid)
+                stamppresent.append(tid)
+                sqrq.sq_task_skip(tid)
+                continue
+
+            if rq.check_stamp_task(tid, taskname, recurse = True, cache=stampcache):
+                logger.debug(2, 'Normal stamp current for task %s', tid)
+                stamppresent.append(tid)
+                sqrq.sq_task_skip(tid)
+                continue
+
+            tocheck.add(tid)
+
+        valid = rq.validate_hashes(tocheck, cooker.data, len(stamppresent), False)
+
+        valid_new = stamppresent
+        for v in valid:
+            valid_new.append(v)
+
+        hashes = {}
+        for mc in sorted(multiconfigs):
+          for tid in sqdata.sq_revdeps:
+            if mc_from_tid(tid) != mc:
+                continue
+            if tid not in valid_new and tid not in noexec and tid not in sqrq.scenequeue_notcovered:
+                sqdata.outrightfail.add(tid)
+
+                h = pending_hash_index(tid, rqdata)
+                if h not in hashes:
+                    hashes[h] = tid
+                else:
+                    sqrq.sq_deferred[tid] = hashes[h]
+                    bb.warn("Deferring %s after %s" % (tid, hashes[h]))
+
+
 class TaskFailure(Exception):
     """
     Exception raised when a task in a runqueue fails
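
As a rough standalone sketch of the deferral logic added above (simplified stand-ins, not the real RunQueueExecute structures): setscene tasks whose pending hash index was already claimed wait on the first claimant, and a deadlocked queue is unblocked by forcing one deferred entry outright.

    # Illustrative only: toy model of the sq_deferred bookkeeping and deadlock breaking
    def assign_deferrals(tids, pending_hash_index):
        hashes = {}      # pending hash index -> first tid seen with that index
        deferred = {}    # tid -> tid it must wait for
        for tid in tids:
            h = pending_hash_index(tid)
            if h not in hashes:
                hashes[h] = tid
            else:
                deferred[tid] = hashes[h]
        return deferred

    def break_deadlock(deferred):
        # Mirrors the "Runqueue deadlocked on deferred tasks" path: pop one entry
        # and hand back the tid it was deferred after, for sq_task_failoutright()
        if deferred:
            return deferred.pop(next(iter(deferred)))
        return None

    deferred = assign_deferrals(["a1:do_package", "b1:do_package", "c1:do_package"],
                                lambda tid: "do_package")
    print(deferred)                  # b1 and c1 both wait on a1
    print(break_deadlock(deferred))  # 'a1:do_package'
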
diff --git a/poky/bitbake/lib/bb/siggen.py b/poky/bitbake/lib/bb/siggen.py
index fe580e4..6a729f3 100644
--- a/poky/bitbake/lib/bb/siggen.py
+++ b/poky/bitbake/lib/bb/siggen.py
@@ -49,7 +49,9 @@
         return self.taskhash[task]
 
     def get_taskhash(self, fn, task, deps, dataCache):
-        return "0"
+        k = fn + "." + task
+        self.taskhash[k] = hashlib.sha256(k.encode("utf-8")).hexdigest()
+        return self.taskhash[k]
 
     def writeout_file_checksum_cache(self):
         """Write/update the file checksum cache onto disk"""
@@ -643,9 +645,9 @@
     a_taint = a_data.get('taint', None)
     b_taint = b_data.get('taint', None)
     if a_taint != b_taint:
-        if a_taint.startswith('nostamp:'):
+        if a_taint and a_taint.startswith('nostamp:'):
             a_taint = a_taint.replace('nostamp:', 'nostamp(uuid4):')
-        if b_taint.startswith('nostamp:'):
+        if b_taint and b_taint.startswith('nostamp:'):
             b_taint = b_taint.replace('nostamp:', 'nostamp(uuid4):')
         output.append(color_format("{color_title}Taint (by forced/invalidated task) changed{color_default} from %s to %s") % (a_taint, b_taint))
 
diff --git a/poky/bitbake/lib/bb/taskdata.py b/poky/bitbake/lib/bb/taskdata.py
index d13bd7c..8c25e09 100644
--- a/poky/bitbake/lib/bb/taskdata.py
+++ b/poky/bitbake/lib/bb/taskdata.py
@@ -1,4 +1,3 @@
-#!/usr/bin/env python
 """
 BitBake 'TaskData' implementation
 
diff --git a/poky/bitbake/lib/bb/tests/event.py b/poky/bitbake/lib/bb/tests/event.py
index b6e40c6..9229b63 100644
--- a/poky/bitbake/lib/bb/tests/event.py
+++ b/poky/bitbake/lib/bb/tests/event.py
@@ -561,14 +561,6 @@
         self.assertEqual(event.fn(1), callback(1))
         self.assertEqual(event.pid, EventClassesTest._worker_pid)
 
-    def test_StampUpdate(self):
-        targets = ["foo", "bar"]
-        stampfns = [lambda:"foobar"]
-        event = bb.event.StampUpdate(targets, stampfns)
-        self.assertEqual(event.targets, targets)
-        self.assertEqual(event.stampPrefix, stampfns)
-        self.assertEqual(event.pid, EventClassesTest._worker_pid)
-
     def test_BuildBase(self):
         """ Test base class for bitbake build events """
         name = "foo"
diff --git a/poky/bitbake/lib/bb/tests/fetch.py b/poky/bitbake/lib/bb/tests/fetch.py
index 16f975b..23c6338 100644
--- a/poky/bitbake/lib/bb/tests/fetch.py
+++ b/poky/bitbake/lib/bb/tests/fetch.py
@@ -899,6 +899,7 @@
         if os.path.exists(os.path.join(repo_path, 'bitbake-gitsm-test1')):
             self.assertTrue(os.path.exists(os.path.join(repo_path, 'bitbake-gitsm-test1', 'bitbake')), msg='submodule of submodule missing')
 
+    @skipIfNoNetwork()
     def test_git_submodule_dbus_broker(self):
         # The following external repositories have shown failures in fetch and unpack operations
         # We want to avoid regressions!
@@ -916,6 +917,7 @@
         self.assertTrue(os.path.exists(os.path.join(repo_path, '.git/modules/subprojects/c-sundry/config')), msg='Missing submodule config "subprojects/c-sundry"')
         self.assertTrue(os.path.exists(os.path.join(repo_path, '.git/modules/subprojects/c-utf8/config')), msg='Missing submodule config "subprojects/c-utf8"')
 
+    @skipIfNoNetwork()
     def test_git_submodule_CLI11(self):
         url = "gitsm://github.com/CLIUtils/CLI11;protocol=git;rev=bd4dc911847d0cde7a6b41dfa626a85aab213baf"
         fetcher = bb.fetch.Fetch([url], self.d)
@@ -929,6 +931,7 @@
         self.assertTrue(os.path.exists(os.path.join(repo_path, '.git/modules/extern/json/config')), msg='Missing submodule config "extern/json"')
         self.assertTrue(os.path.exists(os.path.join(repo_path, '.git/modules/extern/sanitizers/config')), msg='Missing submodule config "extern/sanitizers"')
 
+    @skipIfNoNetwork()
     def test_git_submodule_update_CLI11(self):
         """ Prevent regression on update detection not finding missing submodule, or modules without needed commits """
         url = "gitsm://github.com/CLIUtils/CLI11;protocol=git;rev=cf6a99fa69aaefe477cc52e3ef4a7d2d7fa40714"
@@ -948,6 +951,7 @@
         self.assertTrue(os.path.exists(os.path.join(repo_path, '.git/modules/extern/json/config')), msg='Missing submodule config "extern/json"')
         self.assertTrue(os.path.exists(os.path.join(repo_path, '.git/modules/extern/sanitizers/config')), msg='Missing submodule config "extern/sanitizers"')
 
+    @skipIfNoNetwork()
     def test_git_submodule_aktualizr(self):
         url = "gitsm://github.com/advancedtelematic/aktualizr;branch=master;protocol=git;rev=d00d1a04cc2366d1a5f143b84b9f507f8bd32c44"
         fetcher = bb.fetch.Fetch([url], self.d)
@@ -964,6 +968,7 @@
         self.assertTrue(os.path.exists(os.path.join(repo_path, '.git/modules/third_party/googletest/config')), msg='Missing submodule config "third_party/googletest/config"')
         self.assertTrue(os.path.exists(os.path.join(repo_path, '.git/modules/third_party/HdrHistogram_c/config')), msg='Missing submodule config "third_party/HdrHistogram_c/config"')
 
+    @skipIfNoNetwork()
     def test_git_submodule_iotedge(self):
         """ Prevent regression on deeply nested submodules not being checked out properly, even though they were fetched. """
 
diff --git a/poky/bitbake/lib/bb/tests/runqueue-tests/classes/base.bbclass b/poky/bitbake/lib/bb/tests/runqueue-tests/classes/base.bbclass
new file mode 100644
index 0000000..5b87e20
--- /dev/null
+++ b/poky/bitbake/lib/bb/tests/runqueue-tests/classes/base.bbclass
@@ -0,0 +1,242 @@
+SLOWTASKS ??= ""
+SSTATEVALID ??= ""
+
+def stamptask(d):
+    import time
+
+    thistask = d.expand("${PN}:${BB_CURRENTTASK}")
+    with open(d.expand("${TOPDIR}/%s.run") % thistask, "a+") as f:
+        f.write("\n")
+
+    if d.getVar("BB_CURRENT_MC") != "default":
+        thistask = d.expand("${BB_CURRENT_MC}:${PN}:${BB_CURRENTTASK}")
+    if thistask in d.getVar("SLOWTASKS").split():
+        bb.note("Slowing task %s" % thistask)
+        time.sleep(0.5)
+
+    with open(d.expand("${TOPDIR}/task.log"), "a+") as f:
+        f.write(thistask + "\n")
+
+python do_fetch() {
+    # fetch
+    stamptask(d)
+}
+python do_unpack() {
+    # unpack
+    stamptask(d)
+}
+python do_patch() {
+    # patch
+    stamptask(d)
+}
+python do_populate_lic() {
+    # populate_lic
+    stamptask(d)
+}
+python do_prepare_recipe_sysroot() {
+    # prepare_recipe_sysroot
+    stamptask(d)
+}
+python do_configure() {
+    # configure
+    stamptask(d)
+}
+python do_compile() {
+    # compile
+    stamptask(d)
+}
+python do_install() {
+    # install
+    stamptask(d)
+}
+python do_populate_sysroot() {
+    # populate_sysroot
+    stamptask(d)
+}
+python do_package() {
+    # package
+    stamptask(d)
+}
+python do_package_write_ipk() {
+    # package_write_ipk
+    stamptask(d)
+}
+python do_package_write_rpm() {
+    # package_write_rpm
+    stamptask(d)
+}
+python do_packagedata() {
+    # packagedata
+    stamptask(d)
+}
+python do_package_qa() {
+    # package_qa
+    stamptask(d)
+}
+python do_build() {
+    # build
+    stamptask(d)
+}
+do_prepare_recipe_sysroot[deptask] = "do_populate_sysroot"
+do_package[deptask] += "do_packagedata"
+do_build[recrdeptask] += "do_deploy"
+do_build[recrdeptask] += "do_package_write_ipk"
+do_build[recrdeptask] += "do_package_write_rpm"
+do_package_qa[rdeptask] = "do_packagedata"
+do_populate_lic_deploy[recrdeptask] += "do_populate_lic do_deploy"
+
+DEBIANRDEP = "do_packagedata"
+do_package_write_ipk[rdeptask] = "${DEBIANRDEP}"
+do_package_write_rpm[rdeptask] = "${DEBIANRDEP}"
+
+addtask fetch
+addtask unpack after do_fetch
+addtask patch after do_unpack
+addtask prepare_recipe_sysroot after do_patch
+addtask configure after do_prepare_recipe_sysroot
+addtask compile after do_configure
+addtask install after do_compile
+addtask populate_sysroot after do_install
+addtask package after do_install
+addtask package_write_ipk after do_packagedata do_package
+addtask package_write_rpm after do_packagedata do_package
+addtask packagedata after do_package
+addtask package_qa after do_package
+addtask build after do_package_qa do_package_write_rpm do_package_write_ipk do_populate_sysroot
+
+python do_package_setscene() {
+    stamptask(d)
+}
+python do_package_qa_setscene() {
+    stamptask(d)
+}
+python do_package_write_ipk_setscene() {
+    stamptask(d)
+}
+python do_package_write_rpm_setscene() {
+    stamptask(d)
+}
+python do_packagedata_setscene() {
+    stamptask(d)
+}
+python do_populate_lic_setscene() {
+    stamptask(d)
+}
+python do_populate_sysroot_setscene() {
+    stamptask(d)
+}
+
+addtask package_setscene
+addtask package_qa_setscene
+addtask package_write_ipk_setscene
+addtask package_write_rpm_setscene
+addtask packagedata_setscene
+addtask populate_lic_setscene
+addtask populate_sysroot_setscene
+
+BB_SETSCENE_DEPVALID = "setscene_depvalid"
+
+def setscene_depvalid(task, taskdependees, notneeded, d, log=None):
+    # taskdependees is a dict of tasks which depend on task, each being a 3 item list of [PN, TASKNAME, FILENAME]
+    # task is included in taskdependees too
+    # Return - False - We need this dependency
+    #        - True - We can skip this dependency
+    import re
+
+    def logit(msg, log):
+        if log is not None:
+            log.append(msg)
+        else:
+            bb.debug(2, msg)
+
+    logit("Considering setscene task: %s" % (str(taskdependees[task])), log)
+
+    def isNativeCross(x):
+        return x.endswith("-native") or "-cross-" in x or "-crosssdk" in x or x.endswith("-cross")
+
+    # We only need to trigger populate_lic through direct dependencies
+    if taskdependees[task][1] == "do_populate_lic":
+        return True
+
+    # We only need to trigger packagedata through direct dependencies
+    # but need to preserve packagedata on packagedata links
+    if taskdependees[task][1] == "do_packagedata":
+        for dep in taskdependees:
+            if taskdependees[dep][1] == "do_packagedata":
+                return False
+        return True
+
+    for dep in taskdependees:
+        logit("  considering dependency: %s" % (str(taskdependees[dep])), log)
+        if task == dep:
+            continue
+        if dep in notneeded:
+            continue
+        # do_package_write_* and do_package don't need do_package
+        if taskdependees[task][1] == "do_package" and taskdependees[dep][1] in ['do_package', 'do_package_write_ipk', 'do_package_write_rpm', 'do_packagedata', 'do_package_qa']:
+            continue
+        # do_package_write_* need do_populate_sysroot as they're mainly postinstall dependencies
+        if taskdependees[task][1] == "do_populate_sysroot" and taskdependees[dep][1] in ['do_package_write_ipk', 'do_package_write_rpm']:
+            return False
+        # do_package/packagedata/package_qa don't need do_populate_sysroot
+        if taskdependees[task][1] == "do_populate_sysroot" and taskdependees[dep][1] in ['do_package', 'do_packagedata', 'do_package_qa']:
+            continue
+        # Native/Cross packages don't exist and are noexec anyway
+        if isNativeCross(taskdependees[dep][0]) and taskdependees[dep][1] in ['do_package_write_ipk', 'do_package_write_rpm', 'do_packagedata', 'do_package', 'do_package_qa']:
+            continue
+
+        # This is due to the [depends] in useradd.bbclass complicating matters
+        # The logic *is* reversed here due to the way hard setscene dependencies are injected
+        if (taskdependees[task][1] == 'do_package' or taskdependees[task][1] == 'do_populate_sysroot') and taskdependees[dep][0].endswith(('shadow-native', 'shadow-sysroot', 'base-passwd', 'pseudo-native')) and taskdependees[dep][1] == 'do_populate_sysroot':
+            continue
+
+        # Consider sysroot depending on sysroot tasks
+        if taskdependees[task][1] == 'do_populate_sysroot' and taskdependees[dep][1] == 'do_populate_sysroot':
+            # Native/Cross populate_sysroot need their dependencies
+            if isNativeCross(taskdependees[task][0]) and isNativeCross(taskdependees[dep][0]):
+                return False
+            # Target populate_sysroot depended on by cross tools need to be installed
+            if isNativeCross(taskdependees[dep][0]):
+                return False
+            # Native/cross tools depended upon by target sysroot are not needed
+            # Add an exception for shadow-native as required by useradd.bbclass
+            if isNativeCross(taskdependees[task][0]) and taskdependees[task][0] != 'shadow-native':
+                continue
+            # Target populate_sysroot need their dependencies
+            return False
+
+
+        if taskdependees[dep][1] == "do_populate_lic":
+            continue
+
+        # Safe fallthrough default
+        logit(" Default setscene dependency fall through due to dependency: %s" % (str(taskdependees[dep])), log)
+        return False
+    return True
+
+BB_HASHCHECK_FUNCTION = "sstate_checkhashes"
+
+def sstate_checkhashes(sq_fn, sq_task, sq_hash, sq_hashfn, d, siginfo=False, *, sq_unihash=None):
+
+    ret = []
+    missed = []
+
+    valid = d.getVar("SSTATEVALID").split()
+
+    for task in range(len(sq_fn)):
+        n = os.path.basename(sq_fn[task]).rsplit(".", 1)[0] + ":" + sq_task[task]
+        if n in valid:
+            bb.note("SState: Found valid sstate for %s" % n)
+            ret.append(task)
+        elif os.path.exists(d.expand("${TOPDIR}/%s.run" % n.replace("do_", ""))):
+            bb.note("SState: Found valid sstate for %s (already run)" % n)
+            ret.append(task)
+        else:
+            missed.append(task)
+            bb.note("SState: Found no valid sstate for %s" % n)
+
+    if hasattr(bb.parse.siggen, "checkhashes"):
+        bb.parse.siggen.checkhashes(missed, ret, sq_fn, sq_task, sq_hash, sq_hashfn, d)
+
+    return ret
+
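
To show what the test hash-check function above matches against SSTATEVALID, here is a toy rendering of its name construction; the recipe paths and task names below are made up for illustration:

    import os

    sq_fn = ["/work/recipes/a1.bb", "/work/recipes/b1.bb"]   # hypothetical inputs
    sq_task = ["do_package", "do_populate_sysroot"]
    sstatevalid = "a1:do_package"                            # as SSTATEVALID would carry

    valid = []
    for i in range(len(sq_fn)):
        n = os.path.basename(sq_fn[i]).rsplit(".", 1)[0] + ":" + sq_task[i]
        if n in sstatevalid.split():
            valid.append(i)

    print(valid)   # [0] -> only a1:do_package is treated as having valid sstate
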
diff --git a/poky/bitbake/lib/bb/tests/runqueue-tests/classes/image.bbclass b/poky/bitbake/lib/bb/tests/runqueue-tests/classes/image.bbclass
new file mode 100644
index 0000000..da9ff11
--- /dev/null
+++ b/poky/bitbake/lib/bb/tests/runqueue-tests/classes/image.bbclass
@@ -0,0 +1,5 @@
+do_rootfs[recrdeptask] += "do_package_write_deb do_package_qa"
+do_rootfs[recrdeptask] += "do_package_write_ipk do_package_qa"
+do_rootfs[recrdeptask] += "do_package_write_rpm do_package_qa
+do_rootfs[recrdeptask] += "do_packagedata"
+do_rootfs[recrdeptask] += "do_populate_lic"
diff --git a/poky/bitbake/lib/bb/tests/runqueue-tests/classes/native.bbclass b/poky/bitbake/lib/bb/tests/runqueue-tests/classes/native.bbclass
new file mode 100644
index 0000000..7eaaee5
--- /dev/null
+++ b/poky/bitbake/lib/bb/tests/runqueue-tests/classes/native.bbclass
@@ -0,0 +1,2 @@
+RECIPERDEPTASK = "do_populate_sysroot"
+do_populate_sysroot[rdeptask] = "${RECIPERDEPTASK}"
diff --git a/poky/bitbake/lib/bb/tests/runqueue-tests/conf/bitbake.conf b/poky/bitbake/lib/bb/tests/runqueue-tests/conf/bitbake.conf
new file mode 100644
index 0000000..96ee1cd
--- /dev/null
+++ b/poky/bitbake/lib/bb/tests/runqueue-tests/conf/bitbake.conf
@@ -0,0 +1,16 @@
+CACHE = "${TOPDIR}/cache"
+THISDIR = "${@os.path.dirname(d.getVar('FILE'))}"
+COREBASE := "${@os.path.normpath(os.path.dirname(d.getVar('FILE')+'/../../'))}"
+BBFILES = "${COREBASE}/recipes/*.bb"
+PROVIDES = "${PN}"
+PN = "${@bb.parse.vars_from_file(d.getVar('FILE', False),d)[0]}"
+PF = "${BB_CURRENT_MC}:${PN}"
+export PATH
+TMPDIR ??= "${TOPDIR}"
+STAMP = "${TMPDIR}/stamps/${PN}"
+T = "${TMPDIR}/workdir/${PN}/temp"
+BB_NUMBER_THREADS = "4"
+
+BB_HASHBASE_WHITELIST = "BB_CURRENT_MC"
+
+include conf/multiconfig/${BB_CURRENT_MC}.conf
diff --git a/poky/bitbake/lib/bb/tests/runqueue-tests/conf/multiconfig/mc1.conf b/poky/bitbake/lib/bb/tests/runqueue-tests/conf/multiconfig/mc1.conf
new file mode 100644
index 0000000..ecf23e1
--- /dev/null
+++ b/poky/bitbake/lib/bb/tests/runqueue-tests/conf/multiconfig/mc1.conf
@@ -0,0 +1 @@
+TMPDIR = "${TOPDIR}/mc1/"
diff --git a/poky/bitbake/lib/bb/tests/runqueue-tests/conf/multiconfig/mc2.conf b/poky/bitbake/lib/bb/tests/runqueue-tests/conf/multiconfig/mc2.conf
new file mode 100644
index 0000000..eef338e
--- /dev/null
+++ b/poky/bitbake/lib/bb/tests/runqueue-tests/conf/multiconfig/mc2.conf
@@ -0,0 +1 @@
+TMPDIR = "${TOPDIR}/mc2/"
diff --git a/poky/bitbake/lib/bb/tests/runqueue-tests/recipes/a1.bb b/poky/bitbake/lib/bb/tests/runqueue-tests/recipes/a1.bb
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/poky/bitbake/lib/bb/tests/runqueue-tests/recipes/a1.bb
diff --git a/poky/bitbake/lib/bb/tests/runqueue-tests/recipes/b1.bb b/poky/bitbake/lib/bb/tests/runqueue-tests/recipes/b1.bb
new file mode 100644
index 0000000..c0b288e
--- /dev/null
+++ b/poky/bitbake/lib/bb/tests/runqueue-tests/recipes/b1.bb
@@ -0,0 +1 @@
+DEPENDS = "a1"
\ No newline at end of file
diff --git a/poky/bitbake/lib/bb/tests/runqueue-tests/recipes/c1.bb b/poky/bitbake/lib/bb/tests/runqueue-tests/recipes/c1.bb
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/poky/bitbake/lib/bb/tests/runqueue-tests/recipes/c1.bb
diff --git a/poky/bitbake/lib/bb/tests/runqueue-tests/recipes/d1.bb b/poky/bitbake/lib/bb/tests/runqueue-tests/recipes/d1.bb
new file mode 100644
index 0000000..5ba1975
--- /dev/null
+++ b/poky/bitbake/lib/bb/tests/runqueue-tests/recipes/d1.bb
@@ -0,0 +1,3 @@
+DEPENDS = "a1"
+
+do_package_setscene[depends] = "a1:do_populate_sysroot_setscene"
diff --git a/poky/bitbake/lib/bb/tests/runqueue.py b/poky/bitbake/lib/bb/tests/runqueue.py
new file mode 100644
index 0000000..f22ad4b
--- /dev/null
+++ b/poky/bitbake/lib/bb/tests/runqueue.py
@@ -0,0 +1,228 @@
+#
+# BitBake Tests for runqueue task processing
+#
+# Copyright (C) 2019 Richard Purdie
+#
+# SPDX-License-Identifier: GPL-2.0-only
+#
+
+import unittest
+import bb
+import os
+import tempfile
+import subprocess
+
+#
+# TODO:
+# Add tests on task ordering (X happens before Y after Z)
+#
+
+class RunQueueTests(unittest.TestCase):
+
+    alltasks = ['package', 'fetch', 'unpack', 'patch', 'prepare_recipe_sysroot', 'configure',
+                'compile', 'install', 'packagedata', 'package_qa', 'package_write_rpm', 'package_write_ipk',
+                'populate_sysroot', 'build']
+    a1_sstatevalid = "a1:do_package a1:do_package_qa a1:do_packagedata a1:do_package_write_ipk a1:do_package_write_rpm a1:do_populate_lic a1:do_populate_sysroot"
+    b1_sstatevalid = "b1:do_package b1:do_package_qa b1:do_packagedata b1:do_package_write_ipk b1:do_package_write_rpm b1:do_populate_lic b1:do_populate_sysroot"
+
+    def run_bitbakecmd(self, cmd, builddir, sstatevalid="", slowtasks="", extraenv=None):
+        env = os.environ.copy()
+        env["BBPATH"] = os.path.realpath(os.path.join(os.path.dirname(__file__), "runqueue-tests"))
+        env["BB_ENV_EXTRAWHITE"] = "SSTATEVALID SLOWTASKS"
+        env["SSTATEVALID"] = sstatevalid
+        env["SLOWTASKS"] = slowtasks
+        if extraenv:
+            for k in extraenv:
+                env[k] = extraenv[k]
+                env["BB_ENV_EXTRAWHITE"] = env["BB_ENV_EXTRAWHITE"] + " " + k
+        try:
+            output = subprocess.check_output(cmd, env=env, stderr=subprocess.STDOUT,universal_newlines=True, cwd=builddir)
+        except subprocess.CalledProcessError as e:
+            self.fail("Command %s failed with %s" % (cmd, e.output))
+        tasks = []
+        with open(builddir + "/task.log", "r") as f:
+            tasks = [line.rstrip() for line in f]
+        return tasks
+
+    def test_no_setscenevalid(self):
+        with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
+            cmd = ["bitbake", "a1"]
+            sstatevalid = ""
+            tasks = self.run_bitbakecmd(cmd, tempdir, sstatevalid)
+            expected = ['a1:' + x for x in self.alltasks]
+            self.assertEqual(set(tasks), set(expected))
+
+    def test_single_setscenevalid(self):
+        with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
+            cmd = ["bitbake", "a1"]
+            sstatevalid = "a1:do_package"
+            tasks = self.run_bitbakecmd(cmd, tempdir, sstatevalid)
+            expected = ['a1:package_setscene', 'a1:fetch', 'a1:unpack', 'a1:patch', 'a1:prepare_recipe_sysroot', 'a1:configure',
+                        'a1:compile', 'a1:install', 'a1:packagedata', 'a1:package_qa', 'a1:package_write_rpm', 'a1:package_write_ipk',
+                        'a1:populate_sysroot', 'a1:build']
+            self.assertEqual(set(tasks), set(expected))
+
+    def test_intermediate_setscenevalid(self):
+        with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
+            cmd = ["bitbake", "a1"]
+            sstatevalid = "a1:do_package a1:do_populate_sysroot"
+            tasks = self.run_bitbakecmd(cmd, tempdir, sstatevalid)
+            expected = ['a1:package_setscene', 'a1:packagedata', 'a1:package_qa', 'a1:package_write_rpm', 'a1:package_write_ipk',
+                        'a1:populate_sysroot_setscene', 'a1:build']
+            self.assertEqual(set(tasks), set(expected))
+
+    def test_intermediate_notcovered(self):
+        with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
+            cmd = ["bitbake", "a1"]
+            sstatevalid = "a1:do_package_qa a1:do_packagedata a1:do_package_write_ipk a1:do_package_write_rpm a1:do_populate_lic a1:do_populate_sysroot"
+            tasks = self.run_bitbakecmd(cmd, tempdir, sstatevalid)
+            expected = ['a1:package_write_ipk_setscene', 'a1:package_write_rpm_setscene', 'a1:packagedata_setscene',
+                        'a1:package_qa_setscene', 'a1:build', 'a1:populate_sysroot_setscene']
+            self.assertEqual(set(tasks), set(expected))
+
+    def test_all_setscenevalid(self):
+        with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
+            cmd = ["bitbake", "a1"]
+            sstatevalid = self.a1_sstatevalid
+            tasks = self.run_bitbakecmd(cmd, tempdir, sstatevalid)
+            expected = ['a1:package_write_ipk_setscene', 'a1:package_write_rpm_setscene', 'a1:packagedata_setscene',
+                        'a1:package_qa_setscene', 'a1:build', 'a1:populate_sysroot_setscene']
+            self.assertEqual(set(tasks), set(expected))
+
+    def test_no_settasks(self):
+        with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
+            cmd = ["bitbake", "a1", "-c", "patch"]
+            sstatevalid = self.a1_sstatevalid
+            tasks = self.run_bitbakecmd(cmd, tempdir, sstatevalid)
+            expected = ['a1:fetch', 'a1:unpack', 'a1:patch']
+            self.assertEqual(set(tasks), set(expected))
+
+    def test_mix_covered_notcovered(self):
+        with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
+            cmd = ["bitbake", "a1:do_patch", "a1:do_populate_sysroot"]
+            sstatevalid = self.a1_sstatevalid
+            tasks = self.run_bitbakecmd(cmd, tempdir, sstatevalid)
+            expected = ['a1:fetch', 'a1:unpack', 'a1:patch', 'a1:populate_sysroot_setscene']
+            self.assertEqual(set(tasks), set(expected))
+
+
+    # Test targets with intermediate setscene tasks alongside a target with no intermediate setscene tasks
+    def test_mixed_direct_tasks_setscene_tasks(self):
+        with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
+            cmd = ["bitbake", "c1:do_patch", "a1"]
+            sstatevalid = self.a1_sstatevalid
+            tasks = self.run_bitbakecmd(cmd, tempdir, sstatevalid)
+            expected = ['c1:fetch', 'c1:unpack', 'c1:patch', 'a1:package_write_ipk_setscene', 'a1:package_write_rpm_setscene', 'a1:packagedata_setscene',
+                        'a1:package_qa_setscene', 'a1:build', 'a1:populate_sysroot_setscene']
+            self.assertEqual(set(tasks), set(expected))
+
+    # This test slows down the execution of do_package_setscene until after other real tasks have
+    # started running, which tests for a bug where tasks were being lost from the buildable list of real
+    # tasks if they weren't in tasks_covered or tasks_notcovered
+    def test_slow_setscene(self):
+        with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
+            cmd = ["bitbake", "a1"]
+            sstatevalid = "a1:do_package"
+            slowtasks = "a1:package_setscene"
+            tasks = self.run_bitbakecmd(cmd, tempdir, sstatevalid, slowtasks)
+            expected = ['a1:package_setscene', 'a1:fetch', 'a1:unpack', 'a1:patch', 'a1:prepare_recipe_sysroot', 'a1:configure',
+                        'a1:compile', 'a1:install', 'a1:packagedata', 'a1:package_qa', 'a1:package_write_rpm', 'a1:package_write_ipk',
+                        'a1:populate_sysroot', 'a1:build']
+            self.assertEqual(set(tasks), set(expected))
+
+    def test_setscenewhitelist(self):
+        with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
+            cmd = ["bitbake", "a1"]
+            extraenv = {
+                "BB_SETSCENE_ENFORCE" : "1",
+                "BB_SETSCENE_ENFORCE_WHITELIST" : "a1:do_package_write_rpm a1:do_build"
+            }
+            sstatevalid = "a1:do_package a1:do_package_qa a1:do_packagedata a1:do_package_write_ipk a1:do_populate_lic a1:do_populate_sysroot"
+            tasks = self.run_bitbakecmd(cmd, tempdir, sstatevalid, extraenv=extraenv)
+            expected = ['a1:packagedata_setscene', 'a1:package_qa_setscene', 'a1:package_write_ipk_setscene',
+                        'a1:populate_sysroot_setscene', 'a1:package_setscene']
+            self.assertEqual(set(tasks), set(expected))
+
+    # Tests for problems with dependencies between setscene tasks
+    def test_no_setscenevalid_harddeps(self):
+        with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
+            cmd = ["bitbake", "d1"]
+            sstatevalid = ""
+            tasks = self.run_bitbakecmd(cmd, tempdir, sstatevalid)
+            expected = ['a1:package', 'a1:fetch', 'a1:unpack', 'a1:patch', 'a1:prepare_recipe_sysroot', 'a1:configure',
+                        'a1:compile', 'a1:install', 'a1:packagedata', 'a1:package_write_rpm', 'a1:package_write_ipk',
+                        'a1:populate_sysroot', 'd1:package', 'd1:fetch', 'd1:unpack', 'd1:patch', 'd1:prepare_recipe_sysroot', 'd1:configure',
+                        'd1:compile', 'd1:install', 'd1:packagedata', 'd1:package_qa', 'd1:package_write_rpm', 'd1:package_write_ipk',
+                        'd1:populate_sysroot', 'd1:build']
+            self.assertEqual(set(tasks), set(expected))
+
+    def test_no_setscenevalid_withdeps(self):
+        with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
+            cmd = ["bitbake", "b1"]
+            sstatevalid = ""
+            tasks = self.run_bitbakecmd(cmd, tempdir, sstatevalid)
+            expected = ['a1:' + x for x in self.alltasks] + ['b1:' + x for x in self.alltasks]
+            expected.remove('a1:build')
+            expected.remove('a1:package_qa')
+            self.assertEqual(set(tasks), set(expected))
+
+    def test_single_a1_setscenevalid_withdeps(self):
+        with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
+            cmd = ["bitbake", "b1"]
+            sstatevalid = "a1:do_package"
+            tasks = self.run_bitbakecmd(cmd, tempdir, sstatevalid)
+            expected = ['a1:package_setscene', 'a1:fetch', 'a1:unpack', 'a1:patch', 'a1:prepare_recipe_sysroot', 'a1:configure',
+                        'a1:compile', 'a1:install', 'a1:packagedata', 'a1:package_write_rpm', 'a1:package_write_ipk',
+                        'a1:populate_sysroot'] + ['b1:' + x for x in self.alltasks]
+            self.assertEqual(set(tasks), set(expected))
+
+    def test_single_b1_setscenevalid_withdeps(self):
+        with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
+            cmd = ["bitbake", "b1"]
+            sstatevalid = "b1:do_package"
+            tasks = self.run_bitbakecmd(cmd, tempdir, sstatevalid)
+            expected = ['a1:package', 'a1:fetch', 'a1:unpack', 'a1:patch', 'a1:prepare_recipe_sysroot', 'a1:configure',
+                        'a1:compile', 'a1:install', 'a1:packagedata', 'a1:package_write_rpm', 'a1:package_write_ipk',
+                        'a1:populate_sysroot', 'b1:package_setscene'] + ['b1:' + x for x in self.alltasks]
+            expected.remove('b1:package')
+            self.assertEqual(set(tasks), set(expected))
+
+    def test_intermediate_setscenevalid_withdeps(self):
+        with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
+            cmd = ["bitbake", "b1"]
+            sstatevalid = "a1:do_package a1:do_populate_sysroot b1:do_package"
+            tasks = self.run_bitbakecmd(cmd, tempdir, sstatevalid)
+            expected = ['a1:package_setscene', 'a1:packagedata', 'a1:package_write_rpm', 'a1:package_write_ipk',
+                        'a1:populate_sysroot_setscene', 'b1:package_setscene'] + ['b1:' + x for x in self.alltasks]
+            expected.remove('b1:package')
+            self.assertEqual(set(tasks), set(expected))
+
+    def test_all_setscenevalid_withdeps(self):
+        with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
+            cmd = ["bitbake", "b1"]
+            sstatevalid = self.a1_sstatevalid + " " + self.b1_sstatevalid
+            tasks = self.run_bitbakecmd(cmd, tempdir, sstatevalid)
+            expected = ['a1:package_write_ipk_setscene', 'a1:package_write_rpm_setscene', 'a1:packagedata_setscene',
+                        'b1:build', 'a1:populate_sysroot_setscene', 'b1:package_write_ipk_setscene', 'b1:package_write_rpm_setscene',
+                        'b1:packagedata_setscene', 'b1:package_qa_setscene', 'b1:populate_sysroot_setscene']
+            self.assertEqual(set(tasks), set(expected))
+
+    def test_multiconfig_setscene_optimise(self):
+        with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
+            extraenv = {
+                "BBMULTICONFIG" : "mc1 mc2",
+                "BB_SIGNATURE_HANDLER" : "basic"
+            }
+            cmd = ["bitbake", "b1", "mc:mc1:b1", "mc:mc2:b1"]
+            setscenetasks = ['package_write_ipk_setscene', 'package_write_rpm_setscene', 'packagedata_setscene',
+                             'populate_sysroot_setscene', 'package_qa_setscene']
+            sstatevalid = ""
+            tasks = self.run_bitbakecmd(cmd, tempdir, sstatevalid, extraenv=extraenv)
+            expected = ['a1:' + x for x in self.alltasks] + ['b1:' + x for x in self.alltasks] + \
+                       ['mc1:b1:' + x for x in setscenetasks] + ['mc1:a1:' + x for x in setscenetasks] + \
+                       ['mc2:b1:' + x for x in setscenetasks] + ['mc2:a1:' + x for x in setscenetasks] + \
+                       ['mc1:b1:build', 'mc2:b1:build']
+            for x in ['mc1:a1:package_qa_setscene', 'mc2:a1:package_qa_setscene', 'a1:build', 'a1:package_qa']:
+                expected.remove(x)
+            self.assertEqual(set(tasks), set(expected))
+
diff --git a/poky/bitbake/lib/bb/ui/knotty.py b/poky/bitbake/lib/bb/ui/knotty.py
index 88f638f..1c72aa2 100644
--- a/poky/bitbake/lib/bb/ui/knotty.py
+++ b/poky/bitbake/lib/bb/ui/knotty.py
@@ -660,7 +660,6 @@
             # ignore
             if isinstance(event, (bb.event.BuildBase,
                                   bb.event.MetadataEvent,
-                                  bb.event.StampUpdate,
                                   bb.event.ConfigParsed,
                                   bb.event.MultiConfigParsed,
                                   bb.event.RecipeParsed,
diff --git a/poky/bitbake/lib/bb/ui/uihelper.py b/poky/bitbake/lib/bb/ui/uihelper.py
index db7f0ca..c8dd7df 100644
--- a/poky/bitbake/lib/bb/ui/uihelper.py
+++ b/poky/bitbake/lib/bb/ui/uihelper.py
@@ -40,7 +40,7 @@
             self.running_pids.remove(event.pid)
             self.failed_tasks.append( { 'title' : "%s %s" % (event._package, event._task)})
             self.needUpdate = True
-        elif isinstance(event, bb.runqueue.runQueueTaskStarted) or isinstance(event, bb.runqueue.sceneQueueTaskStarted):
+        elif isinstance(event, bb.runqueue.runQueueTaskStarted):
             self.tasknumber_current = event.stats.completed + event.stats.active + event.stats.failed + 1
             self.tasknumber_total = event.stats.total
             self.needUpdate = True
diff --git a/poky/bitbake/lib/pyinotify.py b/poky/bitbake/lib/pyinotify.py
index 9dcfd0d..1528a22 100644
--- a/poky/bitbake/lib/pyinotify.py
+++ b/poky/bitbake/lib/pyinotify.py
@@ -1,5 +1,4 @@
-#!/usr/bin/env python
-
+#
 # pyinotify.py - python interface to inotify
 # Copyright (c) 2005-2015 Sebastien Martini <seb@dbzteam.org>
 #
diff --git a/poky/bitbake/lib/toaster/tests/browser/selenium_helpers.py b/poky/bitbake/lib/toaster/tests/browser/selenium_helpers.py
index 6d9bb80..02d4f4b 100644
--- a/poky/bitbake/lib/toaster/tests/browser/selenium_helpers.py
+++ b/poky/bitbake/lib/toaster/tests/browser/selenium_helpers.py
@@ -1,4 +1,4 @@
-#! /usr/bin/env python
+#! /usr/bin/env python3
 #
 # BitBake Toaster Implementation
 #
diff --git a/poky/bitbake/lib/toaster/tests/browser/selenium_helpers_base.py b/poky/bitbake/lib/toaster/tests/browser/selenium_helpers_base.py
index 8417aa3..6c94684 100644
--- a/poky/bitbake/lib/toaster/tests/browser/selenium_helpers_base.py
+++ b/poky/bitbake/lib/toaster/tests/browser/selenium_helpers_base.py
@@ -1,4 +1,4 @@
-#! /usr/bin/env python
+#! /usr/bin/env python3
 #
 # BitBake Toaster Implementation
 #
diff --git a/poky/bitbake/lib/toaster/tests/browser/test_all_builds_page.py b/poky/bitbake/lib/toaster/tests/browser/test_all_builds_page.py
index f402161..fba627b 100644
--- a/poky/bitbake/lib/toaster/tests/browser/test_all_builds_page.py
+++ b/poky/bitbake/lib/toaster/tests/browser/test_all_builds_page.py
@@ -1,4 +1,4 @@
-#! /usr/bin/env python
+#! /usr/bin/env python3
 #
 # BitBake Toaster Implementation
 #
diff --git a/poky/bitbake/lib/toaster/tests/browser/test_all_projects_page.py b/poky/bitbake/lib/toaster/tests/browser/test_all_projects_page.py
index f86d19d..afd2d35 100644
--- a/poky/bitbake/lib/toaster/tests/browser/test_all_projects_page.py
+++ b/poky/bitbake/lib/toaster/tests/browser/test_all_projects_page.py
@@ -1,4 +1,4 @@
-#! /usr/bin/env python
+#! /usr/bin/env python3
 #
 # BitBake Toaster Implementation
 #
diff --git a/poky/bitbake/lib/toaster/tests/browser/test_builddashboard_page.py b/poky/bitbake/lib/toaster/tests/browser/test_builddashboard_page.py
index 53c125e..d972aff 100644
--- a/poky/bitbake/lib/toaster/tests/browser/test_builddashboard_page.py
+++ b/poky/bitbake/lib/toaster/tests/browser/test_builddashboard_page.py
@@ -1,4 +1,4 @@
-#! /usr/bin/env python
+#! /usr/bin/env python3
 #
 # BitBake Toaster Implementation
 #
diff --git a/poky/bitbake/lib/toaster/tests/browser/test_builddashboard_page_artifacts.py b/poky/bitbake/lib/toaster/tests/browser/test_builddashboard_page_artifacts.py
index c560d7d..e2623e8 100644
--- a/poky/bitbake/lib/toaster/tests/browser/test_builddashboard_page_artifacts.py
+++ b/poky/bitbake/lib/toaster/tests/browser/test_builddashboard_page_artifacts.py
@@ -1,4 +1,4 @@
-#! /usr/bin/env python
+#! /usr/bin/env python3
 #
 # BitBake Toaster Implementation
 #
diff --git a/poky/bitbake/lib/toaster/tests/browser/test_builddashboard_page_recipes.py b/poky/bitbake/lib/toaster/tests/browser/test_builddashboard_page_recipes.py
index e4f3d68..c542d45 100644
--- a/poky/bitbake/lib/toaster/tests/browser/test_builddashboard_page_recipes.py
+++ b/poky/bitbake/lib/toaster/tests/browser/test_builddashboard_page_recipes.py
@@ -1,4 +1,4 @@
-#! /usr/bin/env python
+#! /usr/bin/env python3
 #
 # BitBake Toaster Implementation
 #
diff --git a/poky/bitbake/lib/toaster/tests/browser/test_builddashboard_page_tasks.py b/poky/bitbake/lib/toaster/tests/browser/test_builddashboard_page_tasks.py
index bdb0c27..22acb47 100644
--- a/poky/bitbake/lib/toaster/tests/browser/test_builddashboard_page_tasks.py
+++ b/poky/bitbake/lib/toaster/tests/browser/test_builddashboard_page_tasks.py
@@ -1,4 +1,4 @@
-#! /usr/bin/env python
+#! /usr/bin/env python3
 #
 # BitBake Toaster Implementation
 #
diff --git a/poky/bitbake/lib/toaster/tests/browser/test_js_unit_tests.py b/poky/bitbake/lib/toaster/tests/browser/test_js_unit_tests.py
index 63f3f4a..e8b4295 100644
--- a/poky/bitbake/lib/toaster/tests/browser/test_js_unit_tests.py
+++ b/poky/bitbake/lib/toaster/tests/browser/test_js_unit_tests.py
@@ -1,4 +1,4 @@
-#! /usr/bin/env python
+#! /usr/bin/env python3
 #
 # BitBake Toaster Implementation
 #
diff --git a/poky/bitbake/lib/toaster/tests/browser/test_landing_page.py b/poky/bitbake/lib/toaster/tests/browser/test_landing_page.py
index 0a00fcc..0790198 100644
--- a/poky/bitbake/lib/toaster/tests/browser/test_landing_page.py
+++ b/poky/bitbake/lib/toaster/tests/browser/test_landing_page.py
@@ -1,4 +1,4 @@
-#! /usr/bin/env python
+#! /usr/bin/env python3
 #
 # BitBake Toaster Implementation
 #
diff --git a/poky/bitbake/lib/toaster/tests/browser/test_layerdetails_page.py b/poky/bitbake/lib/toaster/tests/browser/test_layerdetails_page.py
index e34aa13..f81e696 100644
--- a/poky/bitbake/lib/toaster/tests/browser/test_layerdetails_page.py
+++ b/poky/bitbake/lib/toaster/tests/browser/test_layerdetails_page.py
@@ -1,4 +1,4 @@
-#! /usr/bin/env python
+#! /usr/bin/env python3
 #
 # BitBake Toaster Implementation
 #
diff --git a/poky/bitbake/lib/toaster/tests/browser/test_most_recent_builds_states.py b/poky/bitbake/lib/toaster/tests/browser/test_most_recent_builds_states.py
index d52b184..15d25dc 100644
--- a/poky/bitbake/lib/toaster/tests/browser/test_most_recent_builds_states.py
+++ b/poky/bitbake/lib/toaster/tests/browser/test_most_recent_builds_states.py
@@ -1,4 +1,4 @@
-#! /usr/bin/env python
+#! /usr/bin/env python3
 #
 # BitBake Toaster Implementation
 #
diff --git a/poky/bitbake/lib/toaster/tests/browser/test_new_custom_image_page.py b/poky/bitbake/lib/toaster/tests/browser/test_new_custom_image_page.py
index 3b47a49..0aa3b7a 100644
--- a/poky/bitbake/lib/toaster/tests/browser/test_new_custom_image_page.py
+++ b/poky/bitbake/lib/toaster/tests/browser/test_new_custom_image_page.py
@@ -1,4 +1,4 @@
-#! /usr/bin/env python
+#! /usr/bin/env python3
 #
 # BitBake Toaster Implementation
 #
diff --git a/poky/bitbake/lib/toaster/tests/browser/test_new_project_page.py b/poky/bitbake/lib/toaster/tests/browser/test_new_project_page.py
index d250bd1..8e56bb0 100644
--- a/poky/bitbake/lib/toaster/tests/browser/test_new_project_page.py
+++ b/poky/bitbake/lib/toaster/tests/browser/test_new_project_page.py
@@ -1,4 +1,4 @@
-#! /usr/bin/env python
+#! /usr/bin/env python3
 #
 # BitBake Toaster Implementation
 #
diff --git a/poky/bitbake/lib/toaster/tests/browser/test_project_builds_page.py b/poky/bitbake/lib/toaster/tests/browser/test_project_builds_page.py
index 065f3eb..47fb10b 100644
--- a/poky/bitbake/lib/toaster/tests/browser/test_project_builds_page.py
+++ b/poky/bitbake/lib/toaster/tests/browser/test_project_builds_page.py
@@ -1,4 +1,4 @@
-#! /usr/bin/env python
+#! /usr/bin/env python3
 #
 # BitBake Toaster Implementation
 #
diff --git a/poky/bitbake/lib/toaster/tests/browser/test_project_config_page.py b/poky/bitbake/lib/toaster/tests/browser/test_project_config_page.py
index 48508df..2816eb9 100644
--- a/poky/bitbake/lib/toaster/tests/browser/test_project_config_page.py
+++ b/poky/bitbake/lib/toaster/tests/browser/test_project_config_page.py
@@ -1,4 +1,4 @@
-#! /usr/bin/env python
+#! /usr/bin/env python3
 #
 # BitBake Toaster Implementation
 #
diff --git a/poky/bitbake/lib/toaster/tests/browser/test_project_page.py b/poky/bitbake/lib/toaster/tests/browser/test_project_page.py
index 5cb607d..8b5e1b6 100644
--- a/poky/bitbake/lib/toaster/tests/browser/test_project_page.py
+++ b/poky/bitbake/lib/toaster/tests/browser/test_project_page.py
@@ -1,4 +1,4 @@
-#! /usr/bin/env python
+#! /usr/bin/env python3
 #
 # BitBake Toaster Implementation
 #
diff --git a/poky/bitbake/lib/toaster/tests/browser/test_sample.py b/poky/bitbake/lib/toaster/tests/browser/test_sample.py
index 008ba14..f4ad670 100644
--- a/poky/bitbake/lib/toaster/tests/browser/test_sample.py
+++ b/poky/bitbake/lib/toaster/tests/browser/test_sample.py
@@ -1,4 +1,4 @@
-#! /usr/bin/env python
+#! /usr/bin/env python3
 #
 # BitBake Toaster Implementation
 #
diff --git a/poky/bitbake/lib/toaster/tests/browser/test_task_page.py b/poky/bitbake/lib/toaster/tests/browser/test_task_page.py
index 47c8c1a..26f3dca 100644
--- a/poky/bitbake/lib/toaster/tests/browser/test_task_page.py
+++ b/poky/bitbake/lib/toaster/tests/browser/test_task_page.py
@@ -1,4 +1,4 @@
-#! /usr/bin/env python
+#! /usr/bin/env python3
 #
 # BitBake Toaster Implementation
 #
diff --git a/poky/bitbake/lib/toaster/tests/browser/test_toastertable_ui.py b/poky/bitbake/lib/toaster/tests/browser/test_toastertable_ui.py
index b4f8344..ef78cbb 100644
--- a/poky/bitbake/lib/toaster/tests/browser/test_toastertable_ui.py
+++ b/poky/bitbake/lib/toaster/tests/browser/test_toastertable_ui.py
@@ -1,4 +1,4 @@
-#! /usr/bin/env python
+#! /usr/bin/env python3
 #
 # BitBake Toaster Implementation
 #
diff --git a/poky/bitbake/lib/toaster/tests/builds/buildtest.py b/poky/bitbake/lib/toaster/tests/builds/buildtest.py
index 9f40f97..872bbd3 100644
--- a/poky/bitbake/lib/toaster/tests/builds/buildtest.py
+++ b/poky/bitbake/lib/toaster/tests/builds/buildtest.py
@@ -1,4 +1,4 @@
-#! /usr/bin/env python
+#! /usr/bin/env python3
 #
 # BitBake Toaster Implementation
 #
diff --git a/poky/bitbake/lib/toaster/tests/builds/test_core_image_min.py b/poky/bitbake/lib/toaster/tests/builds/test_core_image_min.py
index 3d3aa2a..44b6cbe 100644
--- a/poky/bitbake/lib/toaster/tests/builds/test_core_image_min.py
+++ b/poky/bitbake/lib/toaster/tests/builds/test_core_image_min.py
@@ -1,4 +1,4 @@
-#! /usr/bin/env python
+#! /usr/bin/env python3
 #
 # BitBake Toaster Implementation
 #
diff --git a/poky/bitbake/lib/toaster/tests/commands/test_loaddata.py b/poky/bitbake/lib/toaster/tests/commands/test_loaddata.py
index b633d97..9e8d555 100644
--- a/poky/bitbake/lib/toaster/tests/commands/test_loaddata.py
+++ b/poky/bitbake/lib/toaster/tests/commands/test_loaddata.py
@@ -1,4 +1,4 @@
-#! /usr/bin/env python
+#! /usr/bin/env python3
 #
 # BitBake Toaster Implementation
 #
diff --git a/poky/bitbake/lib/toaster/tests/commands/test_lsupdates.py b/poky/bitbake/lib/toaster/tests/commands/test_lsupdates.py
index 23a84a2..3c4fbe0 100644
--- a/poky/bitbake/lib/toaster/tests/commands/test_lsupdates.py
+++ b/poky/bitbake/lib/toaster/tests/commands/test_lsupdates.py
@@ -1,4 +1,4 @@
-#! /usr/bin/env python
+#! /usr/bin/env python3
 #
 # BitBake Toaster Implementation
 #
diff --git a/poky/bitbake/lib/toaster/tests/commands/test_runbuilds.py b/poky/bitbake/lib/toaster/tests/commands/test_runbuilds.py
index 29bc7c9..e223b95 100644
--- a/poky/bitbake/lib/toaster/tests/commands/test_runbuilds.py
+++ b/poky/bitbake/lib/toaster/tests/commands/test_runbuilds.py
@@ -1,4 +1,4 @@
-#! /usr/bin/env python
+#! /usr/bin/env python3
 #
 # BitBake Toaster Implementation
 #
diff --git a/poky/bitbake/lib/toaster/tests/eventreplay/__init__.py b/poky/bitbake/lib/toaster/tests/eventreplay/__init__.py
index 3606cba..8ed6792 100644
--- a/poky/bitbake/lib/toaster/tests/eventreplay/__init__.py
+++ b/poky/bitbake/lib/toaster/tests/eventreplay/__init__.py
@@ -1,4 +1,4 @@
-#! /usr/bin/env python
+#! /usr/bin/env python3
 #
 # BitBake Toaster Implementation
 #
diff --git a/poky/bitbake/lib/toaster/tests/functional/functional_helpers.py b/poky/bitbake/lib/toaster/tests/functional/functional_helpers.py
index 6a3f74b..455c408 100644
--- a/poky/bitbake/lib/toaster/tests/functional/functional_helpers.py
+++ b/poky/bitbake/lib/toaster/tests/functional/functional_helpers.py
@@ -1,4 +1,4 @@
-#! /usr/bin/env python
+#! /usr/bin/env python3
 #
 # BitBake Toaster functional tests implementation
 #
diff --git a/poky/bitbake/lib/toaster/tests/functional/test_functional_basic.py b/poky/bitbake/lib/toaster/tests/functional/test_functional_basic.py
index 2b3a288..56c84fb 100644
--- a/poky/bitbake/lib/toaster/tests/functional/test_functional_basic.py
+++ b/poky/bitbake/lib/toaster/tests/functional/test_functional_basic.py
@@ -1,4 +1,4 @@
-#! /usr/bin/env python
+#! /usr/bin/env python3
 #
 # BitBake Toaster functional tests implementation
 #
diff --git a/poky/bitbake/lib/toaster/tests/views/test_views.py b/poky/bitbake/lib/toaster/tests/views/test_views.py
index 477654e..68d9e9d 100644
--- a/poky/bitbake/lib/toaster/tests/views/test_views.py
+++ b/poky/bitbake/lib/toaster/tests/views/test_views.py
@@ -1,4 +1,4 @@
-#! /usr/bin/env python
+#! /usr/bin/env python3
 #
 # BitBake Toaster Implementation
 #
diff --git a/poky/bitbake/lib/toaster/toastermain/management/commands/checksocket.py b/poky/bitbake/lib/toaster/toastermain/management/commands/checksocket.py
index c1758f3..811fd5d 100644
--- a/poky/bitbake/lib/toaster/toastermain/management/commands/checksocket.py
+++ b/poky/bitbake/lib/toaster/toastermain/management/commands/checksocket.py
@@ -1,4 +1,4 @@
-#!/usr/bin/env python
+#!/usr/bin/env python3
 #
 # BitBake Toaster Implementation
 #