poky: subtree update:b66b9f7548..26ae42ded7

Adrian Bunk (1):
      dpkg: Remove workaround patch for host tar < 1.27

Alexander Kanavin (39):
      linux-yocto: exclude from version checks/automated version updates
      pciutils: upgrade 3.6.4 -> 3.7.0
      createrepo-c: upgrade 0.15.10 -> 0.15.11
      librepo: upgrade 1.11.3 -> 1.12.0
      pkgconf: upgrade 1.6.3 -> 1.7.3
      python3-numpy: upgrade 1.18.4 -> 1.18.5
      python3-git: upgrade 3.1.2 -> 3.1.3
      strace: upgrade 5.6 -> 5.7
      acpica: upgrade 20200430 -> 20200528
      man-db: upgrade 2.9.1 -> 2.9.2
      msmtp: upgrade 1.8.10 -> 1.8.11
      epiphany: upgrade 3.36.1 -> 3.36.2
      cogl-1.0: upgrade 1.22.6 -> 1.22.8
      libdrm: upgrade 2.4.101 -> 2.4.102
      vulkan-demos: upgrade to latest revision
      xkeyboard-config: upgrade 2.29 -> 2.30
      linux-firmware: upgrade 20200421 -> 20200519
      babeltrace2: upgrade 2.0.2 -> 2.0.3
      lttng-tools: upgrade 2.12.0 -> 2.12.1
      ffmpeg: upgrade 4.2.2 -> 4.2.3
      wpebackend-fdo: upgrade 1.6.0 -> 1.6.1
      gnutls: upgrade 3.6.13 -> 3.6.14
      libcap: upgrade 2.34 -> 2.36
      bison: upgrade 3.6.2 -> 3.6.3
      asciidoc: 8.6.10 -> 9.0.0
      debianutils: 4.9.1 -> 4.11
      git: upgrade 2.26.2 -> 2.27.0
      go: 1.14.3 -> 1.14.4
      iproute2: upgrade 5.6.0 -> 5.7.0
      libksba: 1.3.5 -> 1.4.0
      lttng-modules: update to 2.12.1
      mpg123: update to 1.26.1
      ovmf: update to 202005
      shared-mime-info: upgrade 1.15 -> 2.0
      subversion: upgrade 1.13.0 -> 1.14.0
      xinetd: 2.3.15 -> 2.3.15.4
      init-system-helpers: use https for fetching
      ca-certificates: correct upstream version check
      build-sysroots: add sysroot paths with native binaries to PATH

Andreas Müller (4):
      vte: tiny cleanup / renumber patch
      vte: upgrade 0.60.2 -> 0.60.3
      harfbuzz: upgrade 2.6.4 -> 2.6.7
      sqlite3: upgrade 3.32.1 -> 3.32.2

Changqing Li (1):
      cups.inc: remove template service from SYSTEMD_SERVICE

Chen Qi (2):
      db: do not install db_verify if 'verify' is not enabled
      vim: restore the 'chmod -x' workaround in do_install

Hongxu Jia (1):
      glib-networking/btrfs-tools/dosfstools/parted/bmap-tools/libsoup-2.4: add nativesdk support

Jacob Kroon (4):
      features_check: Factorize code for checking features
      meta: Don't inherit 'features_check' in recipes that don't utilize it
      features_check: Warn if not used
      insane: Check for feature check variables not being used

Joe Slater (2):
      qemu: force build type to production
      vim: _FORTIFY_SOURCE=2 be gone

Joshua Watt (12):
      bitbake: bitbake: cooker: Split file collections per multiconfig
      bitbake: bitbake: cache: Use multiconfig aware caches
      bitbake: bitbake: lib: Add support for Logging Adapters
      bitbake: bitbake: lib: Add PrefixLoggerAdapter helper
      bitbake: bitbake: cache: Improve logging
      bitbake: bitbake: cache: Cache size optimization
      bitbake: bitbake: tests: Add tests for BBMASK in multiconfig
      bitbake: bitbake: command: Move split_mc_pn to runqueue
      bitbake: bitbake: cache: Fix error when cache is rebuilt
      wic: Fix --extra-space argument handling
      bitbake: bitbake: siggen: Pass all data caches to hash functions
      bitbake: bitbake: tests: Add mcdepends test

Kai Kang (4):
      mdadm: remove service template from SYSTEMD_SERVICE
      wpa-supplicant: remove service templates from SYSTEMD_SERVICE
      encodings: clear postinst script
      avahi-dnsconfd: rdepends on avahi-daemon

Khem Raj (2):
      libunwind: Fix build on aarch64/musl
      stress-ng: Fix build on musl

Lee Chee Yang (1):
      qemu: fix CVE-2020-13361

Ming Liu (1):
      u-boot: support merging .cfg files for UBOOT_CONFIG

Mingli Yu (2):
      python3-magic: add the missing rdepends
      python3-setuptools: add missing rdepends for python3-pkg-resources

Paul Barker (5):
      selftest: git-submodule-test: New recipe for testing a gitsm SRC_URI
      archiver: Capture git submodules in mirror archiver
      selftest-ed: Support native builds
      selftest-nopackages: New recipe in meta-selftest
      archiver: Speed up tests

Pierre-Jean Texier (2):
      libarchive: upgrade 3.4.2 -> 3.4.3
      iptables: upgrade 1.8.4 -> 1.8.5

Rasmus Villemoes (1):
      glibc: move ld.so.conf back to main package

Richard Purdie (1):
      Revert "bitbake.conf: Remove unused DEPLOY_DIR_TOOLS variable"

Stefan Agner (1):
      initramfs-framework: check successful mount using mountpoint

Signed-off-by: Andrew Geissler <geissonator@yahoo.com>
Change-Id: I047d0fa664dcc2864fd7c1a09d124e3d8c197e9f
diff --git a/poky/bitbake/lib/bb/__init__.py b/poky/bitbake/lib/bb/__init__.py
index b96466e..4e2f97b 100644
--- a/poky/bitbake/lib/bb/__init__.py
+++ b/poky/bitbake/lib/bb/__init__.py
@@ -35,12 +35,14 @@
     def emit(self, record):
         pass
 
-Logger = logging.getLoggerClass()
-class BBLogger(Logger):
-    def __init__(self, name):
+class BBLoggerMixin(object):
+    def __init__(self, *args, **kwargs):
+        # Does nothing to allow calling super() from derived classes
+        pass
+
+    def setup_bblogger(self, name):
         if name.split(".")[0] == "BitBake":
             self.debug = self.bbdebug
-        Logger.__init__(self, name)
 
     def bbdebug(self, level, msg, *args, **kwargs):
         loglevel = logging.DEBUG - level + 1
@@ -60,16 +62,56 @@
     def verbnote(self, msg, *args, **kwargs):
         return self.log(logging.INFO + 2, msg, *args, **kwargs)
 
+Logger = logging.getLoggerClass()
+class BBLogger(Logger, BBLoggerMixin):
+    def __init__(self, name, *args, **kwargs):
+        self.setup_bblogger(name)
+        super().__init__(name, *args, **kwargs)
 
 logging.raiseExceptions = False
 logging.setLoggerClass(BBLogger)
 
+class BBLoggerAdapter(logging.LoggerAdapter, BBLoggerMixin):
+    def __init__(self, logger, *args, **kwargs):
+        self.setup_bblogger(logger.name)
+        super().__init__(logger, *args, **kwargs)
+
+    if sys.version_info < (3, 6):
+        # These properties were added in Python 3.6. Add them in older versions
+        # for compatibility
+        @property
+        def manager(self):
+            return self.logger.manager
+
+        @manager.setter
+        def manager(self, value):
+            self.logger.manager = value
+
+        @property
+        def name(self):
+            return self.logger.name
+
+        def __repr__(self):
+            logger = self.logger
+            level = logging.getLevelName(logger.getEffectiveLevel())
+            return '<%s %s (%s)>' % (self.__class__.__name__, logger.name, level)
+
+logging.LoggerAdapter = BBLoggerAdapter
+
 logger = logging.getLogger("BitBake")
 logger.addHandler(NullHandler())
 logger.setLevel(logging.DEBUG - 2)
 
 mainlogger = logging.getLogger("BitBake.Main")
 
+class PrefixLoggerAdapter(logging.LoggerAdapter):
+    def __init__(self, prefix, logger):
+        super().__init__(logger, {})
+        self.__msg_prefix = prefix
+
+    def process(self, msg, kwargs):
+        return "%s%s" %(self.__msg_prefix, msg), kwargs
+
 # This has to be imported after the setLoggerClass, as the import of bb.msg
 # can result in construction of the various loggers.
 import bb.msg
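
As an illustration, a minimal sketch of how the new PrefixLoggerAdapter is
used (the logger name and prefix mirror the Cache usage later in this change;
the message is an example):

    import logging
    from bb import PrefixLoggerAdapter

    logger = logging.getLogger("BitBake.Cache")
    # Every message routed through the adapter gains the fixed prefix
    plog = PrefixLoggerAdapter("Cache: mc1: ", logger)
    plog.info("Loading cache file")  # logged as "Cache: mc1: Loading cache file"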
diff --git a/poky/bitbake/lib/bb/cache.py b/poky/bitbake/lib/bb/cache.py
index d1be836..be5ea6a 100644
--- a/poky/bitbake/lib/bb/cache.py
+++ b/poky/bitbake/lib/bb/cache.py
@@ -19,16 +19,20 @@
 import os
 import logging
 import pickle
-from collections import defaultdict
+from collections import defaultdict, Mapping
 import bb.utils
+from bb import PrefixLoggerAdapter
 import re
 
 logger = logging.getLogger("BitBake.Cache")
 
 __cache_version__ = "152"
 
-def getCacheFile(path, filename, data_hash):
-    return os.path.join(path, filename + "." + data_hash)
+def getCacheFile(path, filename, mc, data_hash):
+    mcspec = ''
+    if mc:
+        mcspec = ".%s" % mc
+    return os.path.join(path, filename + mcspec + "." + data_hash)
 
 # RecipeInfoCommon defines common data retrieving methods
 # from meta data for caches. CoreRecipeInfo as well as other
@@ -324,7 +328,7 @@
         bb_data = self.load_bbfile(virtualfn, appends, virtonly=True)
         return bb_data[virtual]
 
-    def load_bbfile(self, bbfile, appends, virtonly = False):
+    def load_bbfile(self, bbfile, appends, virtonly = False, mc=None):
         """
         Load and parse one .bb build file
         Return the data and whether parsing resulted in the file being skipped
@@ -337,6 +341,10 @@
             datastores = parse_recipe(bb_data, bbfile, appends, mc)
             return datastores
 
+        if mc is not None:
+            bb_data = self.databuilder.mcdata[mc].createCopy()
+            return parse_recipe(bb_data, bbfile, appends, mc)
+
         bb_data = self.data.createCopy()
         datastores = parse_recipe(bb_data, bbfile, appends)
 
@@ -354,14 +362,15 @@
     """
     BitBake Cache implementation
     """
-
-    def __init__(self, databuilder, data_hash, caches_array):
+    def __init__(self, databuilder, mc, data_hash, caches_array):
         super().__init__(databuilder)
         data = databuilder.data
 
         # Pass caches_array information into Cache Constructor
         # It will be used later for deciding whether we
         # need extra cache file dump/load support
+        self.mc = mc
+        self.logger = PrefixLoggerAdapter("Cache: %s: " % (mc if mc else "default"), logger)
         self.caches_array = caches_array
         self.cachedir = data.getVar("CACHE")
         self.clean = set()
@@ -374,31 +383,47 @@
 
         if self.cachedir in [None, '']:
             self.has_cache = False
-            logger.info("Not using a cache. "
-                        "Set CACHE = <directory> to enable.")
+            self.logger.info("Not using a cache. "
+                             "Set CACHE = <directory> to enable.")
             return
 
         self.has_cache = True
-        self.cachefile = getCacheFile(self.cachedir, "bb_cache.dat", self.data_hash)
 
-        logger.debug(1, "Cache dir: %s", self.cachedir)
+    def getCacheFile(self, cachefile):
+        return getCacheFile(self.cachedir, cachefile, self.mc, self.data_hash)
+
+    def prepare_cache(self, progress):
+        if not self.has_cache:
+            return 0
+
+        loaded = 0
+
+        self.cachefile = self.getCacheFile("bb_cache.dat")
+
+        self.logger.debug(1, "Cache dir: %s", self.cachedir)
         bb.utils.mkdirhier(self.cachedir)
 
         cache_ok = True
         if self.caches_array:
             for cache_class in self.caches_array:
-                cachefile = getCacheFile(self.cachedir, cache_class.cachefile, self.data_hash)
-                cache_ok = cache_ok and os.path.exists(cachefile)
+                cachefile = self.getCacheFile(cache_class.cachefile)
+                cache_exists = os.path.exists(cachefile)
+                self.logger.debug(2, "Checking if %s exists: %r", cachefile, cache_exists)
+                cache_ok = cache_ok and cache_exists
                 cache_class.init_cacheData(self)
         if cache_ok:
-            self.load_cachefile()
+            loaded = self.load_cachefile(progress)
         elif os.path.isfile(self.cachefile):
-            logger.info("Out of date cache found, rebuilding...")
+            self.logger.info("Out of date cache found, rebuilding...")
         else:
-            logger.debug(1, "Cache file %s not found, building..." % self.cachefile)
+            self.logger.debug(1, "Cache file %s not found, building..." % self.cachefile)
 
         # We don't use the symlink, it's just for debugging convenience
-        symlink = os.path.join(self.cachedir, "bb_cache.dat")
+        if self.mc:
+            symlink = os.path.join(self.cachedir, "bb_cache.dat.%s" % self.mc)
+        else:
+            symlink = os.path.join(self.cachedir, "bb_cache.dat")
+
         if os.path.exists(symlink):
             bb.utils.remove(symlink)
         try:
@@ -406,22 +431,31 @@
         except OSError:
             pass
 
-    def load_cachefile(self):
+        return loaded
+
+    def cachesize(self):
+        if not self.has_cache:
+            return 0
+
         cachesize = 0
+        for cache_class in self.caches_array:
+            cachefile = self.getCacheFile(cache_class.cachefile)
+            try:
+                with open(cachefile, "rb") as cachefile:
+                    cachesize += os.fstat(cachefile.fileno()).st_size
+            except FileNotFoundError:
+                pass
+
+        return cachesize
+
+    def load_cachefile(self, progress):
+        cachesize = self.cachesize()
         previous_progress = 0
         previous_percent = 0
 
-        # Calculate the correct cachesize of all those cache files
         for cache_class in self.caches_array:
-            cachefile = getCacheFile(self.cachedir, cache_class.cachefile, self.data_hash)
-            with open(cachefile, "rb") as cachefile:
-                cachesize += os.fstat(cachefile.fileno()).st_size
-
-        bb.event.fire(bb.event.CacheLoadStarted(cachesize), self.data)
-
-        for cache_class in self.caches_array:
-            cachefile = getCacheFile(self.cachedir, cache_class.cachefile, self.data_hash)
-            logger.debug(1, 'Loading cache file: %s' % cachefile)
+            cachefile = self.getCacheFile(cache_class.cachefile)
+            self.logger.debug(1, 'Loading cache file: %s' % cachefile)
             with open(cachefile, "rb") as cachefile:
                 pickled = pickle.Unpickler(cachefile)
                 # Check cache version information
@@ -429,15 +463,15 @@
                     cache_ver = pickled.load()
                     bitbake_ver = pickled.load()
                 except Exception:
-                    logger.info('Invalid cache, rebuilding...')
-                    return
+                    self.logger.info('Invalid cache, rebuilding...')
+                    return 0
 
                 if cache_ver != __cache_version__:
-                    logger.info('Cache version mismatch, rebuilding...')
-                    return
+                    self.logger.info('Cache version mismatch, rebuilding...')
+                    return 0
                 elif bitbake_ver != bb.__version__:
-                    logger.info('Bitbake version mismatch, rebuilding...')
-                    return
+                    self.logger.info('Bitbake version mismatch, rebuilding...')
+                    return 0
 
                 # Load the rest of the cache file
                 current_progress = 0
@@ -460,29 +494,17 @@
                         self.depends_cache[key] = [value]
                     # only fire events on even percentage boundaries
                     current_progress = cachefile.tell() + previous_progress
-                    if current_progress > cachesize:
-                        # we might have calculated incorrect total size because a file
-                        # might've been written out just after we checked its size
-                        cachesize = current_progress
-                    current_percent = 100 * current_progress / cachesize
-                    if current_percent > previous_percent:
-                        previous_percent = current_percent
-                        bb.event.fire(bb.event.CacheLoadProgress(current_progress, cachesize),
-                                      self.data)
+                    progress(cachefile.tell() + previous_progress)
 
                 previous_progress += current_progress
 
-        # Note: depends cache number is corresponding to the parsing file numbers.
-        # The same file has several caches, still regarded as one item in the cache
-        bb.event.fire(bb.event.CacheLoadCompleted(cachesize,
-                                                  len(self.depends_cache)),
-                      self.data)
+        return len(self.depends_cache)
 
     def parse(self, filename, appends):
         """Parse the specified filename, returning the recipe information"""
-        logger.debug(1, "Parsing %s", filename)
+        self.logger.debug(1, "Parsing %s", filename)
         infos = []
-        datastores = self.load_bbfile(filename, appends)
+        datastores = self.load_bbfile(filename, appends, mc=self.mc)
         depends = []
         variants = []
         # Process the "real" fn last so we can store variants list
@@ -534,7 +556,7 @@
         cached, infos = self.load(fn, appends)
         for virtualfn, info_array in infos:
             if info_array[0].skipped:
-                logger.debug(1, "Skipping %s: %s", virtualfn, info_array[0].skipreason)
+                self.logger.debug(1, "Skipping %s: %s", virtualfn, info_array[0].skipreason)
                 skipped += 1
             else:
                 self.add_info(virtualfn, info_array, cacheData, not cached)
@@ -570,21 +592,21 @@
 
         # File isn't in depends_cache
         if not fn in self.depends_cache:
-            logger.debug(2, "Cache: %s is not cached", fn)
+            self.logger.debug(2, "%s is not cached", fn)
             return False
 
         mtime = bb.parse.cached_mtime_noerror(fn)
 
         # Check file still exists
         if mtime == 0:
-            logger.debug(2, "Cache: %s no longer exists", fn)
+            self.logger.debug(2, "%s no longer exists", fn)
             self.remove(fn)
             return False
 
         info_array = self.depends_cache[fn]
         # Check the file's timestamp
         if mtime != info_array[0].timestamp:
-            logger.debug(2, "Cache: %s changed", fn)
+            self.logger.debug(2, "%s changed", fn)
             self.remove(fn)
             return False
 
@@ -595,14 +617,14 @@
                 fmtime = bb.parse.cached_mtime_noerror(f)
                 # Check if file still exists
                 if old_mtime != 0 and fmtime == 0:
-                    logger.debug(2, "Cache: %s's dependency %s was removed",
-                                    fn, f)
+                    self.logger.debug(2, "%s's dependency %s was removed",
+                                         fn, f)
                     self.remove(fn)
                     return False
 
                 if (fmtime != old_mtime):
-                    logger.debug(2, "Cache: %s's dependency %s changed",
-                                    fn, f)
+                    self.logger.debug(2, "%s's dependency %s changed",
+                                         fn, f)
                     self.remove(fn)
                     return False
 
@@ -618,14 +640,14 @@
                         continue
                     f, exist = f.split(":")
                     if (exist == "True" and not os.path.exists(f)) or (exist == "False" and os.path.exists(f)):
-                        logger.debug(2, "Cache: %s's file checksum list file %s changed",
-                                        fn, f)
+                        self.logger.debug(2, "%s's file checksum list file %s changed",
+                                             fn, f)
                         self.remove(fn)
                         return False
 
-        if appends != info_array[0].appends:
-            logger.debug(2, "Cache: appends for %s changed", fn)
-            logger.debug(2, "%s to %s" % (str(appends), str(info_array[0].appends)))
+        if tuple(appends) != tuple(info_array[0].appends):
+            self.logger.debug(2, "appends for %s changed", fn)
+            self.logger.debug(2, "%s to %s" % (str(appends), str(info_array[0].appends)))
             self.remove(fn)
             return False
 
@@ -634,10 +656,10 @@
             virtualfn = variant2virtual(fn, cls)
             self.clean.add(virtualfn)
             if virtualfn not in self.depends_cache:
-                logger.debug(2, "Cache: %s is not cached", virtualfn)
+                self.logger.debug(2, "%s is not cached", virtualfn)
                 invalid = True
             elif len(self.depends_cache[virtualfn]) != len(self.caches_array):
-                logger.debug(2, "Cache: Extra caches missing for %s?" % virtualfn)
+                self.logger.debug(2, "Extra caches missing for %s?" % virtualfn)
                 invalid = True
 
         # If any one of the variants is not present, mark as invalid for all
@@ -645,10 +667,10 @@
             for cls in info_array[0].variants:
                 virtualfn = variant2virtual(fn, cls)
                 if virtualfn in self.clean:
-                    logger.debug(2, "Cache: Removing %s from cache", virtualfn)
+                    self.logger.debug(2, "Removing %s from cache", virtualfn)
                     self.clean.remove(virtualfn)
             if fn in self.clean:
-                logger.debug(2, "Cache: Marking %s as not clean", fn)
+                self.logger.debug(2, "Marking %s as not clean", fn)
                 self.clean.remove(fn)
             return False
 
@@ -661,10 +683,10 @@
         Called from the parser in error cases
         """
         if fn in self.depends_cache:
-            logger.debug(1, "Removing %s from cache", fn)
+            self.logger.debug(1, "Removing %s from cache", fn)
             del self.depends_cache[fn]
         if fn in self.clean:
-            logger.debug(1, "Marking %s as unclean", fn)
+            self.logger.debug(1, "Marking %s as unclean", fn)
             self.clean.remove(fn)
 
     def sync(self):
@@ -677,12 +699,13 @@
             return
 
         if self.cacheclean:
-            logger.debug(2, "Cache is clean, not saving.")
+            self.logger.debug(2, "Cache is clean, not saving.")
             return
 
         for cache_class in self.caches_array:
             cache_class_name = cache_class.__name__
-            cachefile = getCacheFile(self.cachedir, cache_class.cachefile, self.data_hash)
+            cachefile = self.getCacheFile(cache_class.cachefile)
+            self.logger.debug(2, "Writing %s", cachefile)
             with open(cachefile, "wb") as f:
                 p = pickle.Pickler(f, pickle.HIGHEST_PROTOCOL)
                 p.dump(__cache_version__)
@@ -701,8 +724,18 @@
         return bb.parse.cached_mtime_noerror(cachefile)
 
     def add_info(self, filename, info_array, cacheData, parsed=None, watcher=None):
+        if self.mc is not None:
+            (fn, cls, mc) = virtualfn2realfn(filename)
+            if mc:
+                self.logger.error("Unexpected multiconfig %s", virtualfn)
+                return
+
+            vfn = realfn2virtual(fn, cls, self.mc)
+        else:
+            vfn = filename
+
         if isinstance(info_array[0], CoreRecipeInfo) and (not info_array[0].skipped):
-            cacheData.add_from_recipeinfo(filename, info_array)
+            cacheData.add_from_recipeinfo(vfn, info_array)
 
             if watcher:
                 watcher(info_array[0].file_depends)
@@ -727,6 +760,65 @@
             info_array.append(cache_class(realfn, data))
         self.add_info(file_name, info_array, cacheData, parsed)
 
+class MulticonfigCache(Mapping):
+    def __init__(self, databuilder, data_hash, caches_array):
+        def progress(p):
+            nonlocal current_progress
+            nonlocal previous_progress
+            nonlocal previous_percent
+            nonlocal cachesize
+
+            current_progress = previous_progress + p
+
+            if current_progress > cachesize:
+                # we might have calculated incorrect total size because a file
+                # might've been written out just after we checked its size
+                cachesize = current_progress
+            current_percent = 100 * current_progress / cachesize
+            if current_percent > previous_percent:
+                previous_percent = current_percent
+                bb.event.fire(bb.event.CacheLoadProgress(current_progress, cachesize),
+                                databuilder.data)
+
+
+        cachesize = 0
+        current_progress = 0
+        previous_progress = 0
+        previous_percent = 0
+        self.__caches = {}
+
+        for mc, mcdata in databuilder.mcdata.items():
+            self.__caches[mc] = Cache(databuilder, mc, data_hash, caches_array)
+
+            cachesize += self.__caches[mc].cachesize()
+
+        bb.event.fire(bb.event.CacheLoadStarted(cachesize), databuilder.data)
+        loaded = 0
+
+        for c in self.__caches.values():
+            loaded += c.prepare_cache(progress)
+            previous_progress = current_progress
+
+        # Note: the depends cache count corresponds to the number of parsed files;
+        # one file can have several caches yet is still regarded as one item in the cache
+        bb.event.fire(bb.event.CacheLoadCompleted(cachesize, loaded), databuilder.data)
+
+    def __len__(self):
+        return len(self.__caches)
+
+    def __getitem__(self, key):
+        return self.__caches[key]
+
+    def __contains__(self, key):
+        return key in self.__caches
+
+    def __iter__(self):
+        for k in self.__caches:
+            yield k
+
+    def keys(self):
+        return self.__caches.keys()
+
 
 def init(cooker):
     """
diff --git a/poky/bitbake/lib/bb/command.py b/poky/bitbake/lib/bb/command.py
index 6abf386..3902ccc 100644
--- a/poky/bitbake/lib/bb/command.py
+++ b/poky/bitbake/lib/bb/command.py
@@ -138,12 +138,6 @@
     def reset(self):
         self.remotedatastores = bb.remotedata.RemoteDatastores(self.cooker)
 
-def split_mc_pn(pn):
-    if pn.startswith("multiconfig:"):
-        _, mc, pn = pn.split(":", 2)
-        return (mc, pn)
-    return ('', pn)
-
 class CommandsSync:
     """
     A class of synchronous commands
@@ -232,7 +226,11 @@
 
     def matchFile(self, command, params):
         fMatch = params[0]
-        return command.cooker.matchFile(fMatch)
+        try:
+            mc = params[1]
+        except IndexError:
+            mc = ''
+        return command.cooker.matchFile(fMatch, mc)
     matchFile.needconfig = False
 
     def getUIHandlerNum(self, command, params):
@@ -395,22 +393,38 @@
     def getSkippedRecipes(self, command, params):
         # Return list sorted by reverse priority order
         import bb.cache
-        skipdict = OrderedDict(sorted(command.cooker.skiplist.items(),
-                                      key=lambda x: (-command.cooker.collection.calc_bbfile_priority(bb.cache.virtualfn2realfn(x[0])[0]), x[0])))
+        def sortkey(x):
+            vfn, _ = x
+            realfn, _, mc = bb.cache.virtualfn2realfn(vfn)
+            return (-command.cooker.collections[mc].calc_bbfile_priority(realfn), vfn)
+
+        skipdict = OrderedDict(sorted(command.cooker.skiplist.items(), key=sortkey))
         return list(skipdict.items())
     getSkippedRecipes.readonly = True
 
     def getOverlayedRecipes(self, command, params):
-        return list(command.cooker.collection.overlayed.items())
+        try:
+            mc = params[0]
+        except IndexError:
+            mc = ''
+        return list(command.cooker.collections[mc].overlayed.items())
     getOverlayedRecipes.readonly = True
 
     def getFileAppends(self, command, params):
         fn = params[0]
-        return command.cooker.collection.get_file_appends(fn)
+        try:
+            mc = params[1]
+        except IndexError:
+            mc = ''
+        return command.cooker.collections[mc].get_file_appends(fn)
     getFileAppends.readonly = True
 
     def getAllAppends(self, command, params):
-        return command.cooker.collection.bbappends
+        try:
+            mc = params[0]
+        except IndexError:
+            mc = ''
+        return command.cooker.collections[mc].bbappends
     getAllAppends.readonly = True
 
     def findProviders(self, command, params):
@@ -422,7 +436,7 @@
     findProviders.readonly = True
 
     def findBestProvider(self, command, params):
-        (mc, pn) = split_mc_pn(params[0])
+        (mc, pn) = bb.runqueue.split_mc(params[0])
         return command.cooker.findBestProvider(pn, mc)
     findBestProvider.readonly = True
 
@@ -496,6 +510,7 @@
         for the recipe.
         """
         fn = params[0]
+        mc = bb.runqueue.mc_from_tid(fn)
         appends = params[1]
         appendlist = params[2]
         if len(params) > 3:
@@ -507,7 +522,7 @@
             if appendlist is not None:
                 appendfiles = appendlist
             else:
-                appendfiles = command.cooker.collection.get_file_appends(fn)
+                appendfiles = command.cooker.collections[mc].get_file_appends(fn)
         else:
             appendfiles = []
         # We are calling bb.cache locally here rather than on the server,
@@ -517,7 +532,7 @@
         if config_data:
             # We have to use a different function here if we're passing in a datastore
             # NOTE: we took a copy above, so we don't do it here again
-            envdata = bb.cache.parse_recipe(config_data, fn, appendfiles)['']
+            envdata = bb.cache.parse_recipe(config_data, fn, appendfiles, mc)['']
         else:
             # Use the standard path
             parser = bb.cache.NoCache(command.cooker.databuilder)
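
A rough client-side sketch of the extended command API: several commands now
take an optional trailing multiconfig name, defaulting to '' when omitted
(the recipe path is hypothetical, assuming a connected tinfoil instance):

    import bb.tinfoil

    fn = "/path/to/recipe.bb"  # hypothetical recipe filename
    with bb.tinfoil.Tinfoil() as tinfoil:
        tinfoil.prepare()
        appends = tinfoil.run_command("getFileAppends", fn)            # default mc
        appends_mc1 = tinfoil.run_command("getFileAppends", fn, "mc1") # mc1's view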
diff --git a/poky/bitbake/lib/bb/cooker.py b/poky/bitbake/lib/bb/cooker.py
index e527e23..effd024 100644
--- a/poky/bitbake/lib/bb/cooker.py
+++ b/poky/bitbake/lib/bb/cooker.py
@@ -525,7 +525,7 @@
             self.parseConfiguration()
 
             fn, cls, mc = bb.cache.virtualfn2realfn(buildfile)
-            fn = self.matchFile(fn)
+            fn = self.matchFile(fn, mc)
             fn = bb.cache.realfn2virtual(fn, cls, mc)
         elif len(pkgs_to_build) == 1:
             mc = mc_base(pkgs_to_build[0])
@@ -541,8 +541,8 @@
 
         if fn:
             try:
-                bb_cache = bb.cache.Cache(self.databuilder, self.data_hash, self.caches_array)
-                envdata = bb_cache.loadDataFull(fn, self.collection.get_file_appends(fn))
+                bb_caches = bb.cache.MulticonfigCache(self.databuilder, self.data_hash, self.caches_array)
+                envdata = bb_caches[mc].loadDataFull(fn, self.collections[mc].get_file_appends(fn))
             except Exception as e:
                 parselog.exception("Unable to read %s", fn)
                 raise
@@ -929,26 +929,33 @@
         logger.info("Task dependencies saved to 'task-depends.dot'")
 
     def show_appends_with_no_recipes(self):
+        appends_without_recipes = {}
         # Determine which bbappends haven't been applied
+        for mc in self.multiconfigs:
+            # First get list of recipes, including skipped
+            recipefns = list(self.recipecaches[mc].pkg_fn.keys())
+            recipefns.extend(self.skiplist.keys())
 
-        # First get list of recipes, including skipped
-        recipefns = list(self.recipecaches[''].pkg_fn.keys())
-        recipefns.extend(self.skiplist.keys())
+            # Work out list of bbappends that have been applied
+            applied_appends = []
+            for fn in recipefns:
+                applied_appends.extend(self.collections[mc].get_file_appends(fn))
 
-        # Work out list of bbappends that have been applied
-        applied_appends = []
-        for fn in recipefns:
-            applied_appends.extend(self.collection.get_file_appends(fn))
+            appends_without_recipes[mc] = []
+            for _, appendfn in self.collections[mc].bbappends:
+                if not appendfn in applied_appends:
+                    appends_without_recipes[mc].append(appendfn)
 
-        appends_without_recipes = []
-        for _, appendfn in self.collection.bbappends:
-            if not appendfn in applied_appends:
-                appends_without_recipes.append(appendfn)
+        msgs = []
+        for mc in sorted(appends_without_recipes.keys()):
+            if appends_without_recipes[mc]:
+                msgs.append('No recipes in %s available for:\n  %s' % (mc if mc else 'default',
+                                                                        '\n  '.join(appends_without_recipes[mc])))
 
-        if appends_without_recipes:
-            msg = 'No recipes available for:\n  %s' % '\n  '.join(appends_without_recipes)
-            warn_only = self.data.getVar("BB_DANGLINGAPPENDS_WARNONLY", \
-                 False) or "no"
+        if msgs:
+            msg = "\n".join(msgs)
+            warn_only = self.databuilder.mcdata[mc].getVar("BB_DANGLINGAPPENDS_WARNONLY", \
+                False) or "no"
             if warn_only.lower() in ("1", "yes", "true"):
                 bb.warn(msg)
             else:
@@ -1249,15 +1256,15 @@
         if siggen_cache:
             bb.parse.siggen.checksum_cache.mtime_cache.clear()
 
-    def matchFiles(self, bf):
+    def matchFiles(self, bf, mc=''):
         """
         Find the .bb files which match the expression in 'buildfile'.
         """
         if bf.startswith("/") or bf.startswith("../"):
             bf = os.path.abspath(bf)
 
-        self.collection = CookerCollectFiles(self.bbfile_config_priorities)
-        filelist, masked, searchdirs = self.collection.collect_bbfiles(self.data, self.data)
+        self.collections = {mc: CookerCollectFiles(self.bbfile_config_priorities, mc)}
+        filelist, masked, searchdirs = self.collections[mc].collect_bbfiles(self.databuilder.mcdata[mc], self.databuilder.mcdata[mc])
         try:
             os.stat(bf)
             bf = os.path.abspath(bf)
@@ -1270,12 +1277,12 @@
                     matches.append(f)
             return matches
 
-    def matchFile(self, buildfile):
+    def matchFile(self, buildfile, mc=''):
         """
         Find the .bb file which matches the expression in 'buildfile'.
         Raise an error if multiple files
         """
-        matches = self.matchFiles(buildfile)
+        matches = self.matchFiles(buildfile, mc)
         if len(matches) != 1:
             if matches:
                 msg = "Unable to match '%s' to a specific recipe file - %s matches found:" % (buildfile, len(matches))
@@ -1316,14 +1323,14 @@
             task = "do_%s" % task
 
         fn, cls, mc = bb.cache.virtualfn2realfn(buildfile)
-        fn = self.matchFile(fn)
+        fn = self.matchFile(fn, mc)
 
         self.buildSetVars()
         self.reset_mtime_caches()
 
-        bb_cache = bb.cache.Cache(self.databuilder, self.data_hash, self.caches_array)
+        bb_caches = bb.cache.MulticonfigCache(self.databuilder, self.data_hash, self.caches_array)
 
-        infos = bb_cache.parse(fn, self.collection.get_file_appends(fn))
+        infos = bb_caches[mc].parse(fn, self.collections[mc].get_file_appends(fn))
         infos = dict(infos)
 
         fn = bb.cache.realfn2virtual(fn, cls, mc)
@@ -1552,14 +1559,24 @@
                 for dep in self.configuration.extra_assume_provided:
                     self.recipecaches[mc].ignored_dependencies.add(dep)
 
-            self.collection = CookerCollectFiles(self.bbfile_config_priorities)
-            (filelist, masked, searchdirs) = self.collection.collect_bbfiles(self.data, self.data)
+            self.collections = {}
+
+            mcfilelist = {}
+            total_masked = 0
+            searchdirs = set()
+            for mc in self.multiconfigs:
+                self.collections[mc] = CookerCollectFiles(self.bbfile_config_priorities, mc)
+                (filelist, masked, search) = self.collections[mc].collect_bbfiles(self.databuilder.mcdata[mc], self.databuilder.mcdata[mc])
+
+                mcfilelist[mc] = filelist
+                total_masked += masked
+                searchdirs |= set(search)
 
             # Add inotify watches for directories searched for bb/bbappend files
             for dirent in searchdirs:
                 self.add_filewatch([[dirent]], dirs=True)
 
-            self.parser = CookerParser(self, filelist, masked)
+            self.parser = CookerParser(self, mcfilelist, total_masked)
             self.parsecache_valid = True
 
         self.state = state.parsing
@@ -1571,7 +1588,7 @@
             self.show_appends_with_no_recipes()
             self.handlePrefProviders()
             for mc in self.multiconfigs:
-                self.recipecaches[mc].bbfile_priority = self.collection.collection_priorities(self.recipecaches[mc].pkg_fn, self.data)
+                self.recipecaches[mc].bbfile_priority = self.collections[mc].collection_priorities(self.recipecaches[mc].pkg_fn, self.data)
             self.state = state.running
 
             # Send an event listing all stamps reachable after parsing
@@ -1679,7 +1696,8 @@
 
 
 class CookerCollectFiles(object):
-    def __init__(self, priorities):
+    def __init__(self, priorities, mc=''):
+        self.mc = mc
         self.bbappends = []
         # Priorities is a list of tuples, with the second element as the pattern.
         # We need to sort the list with the longest pattern first, and so on to
@@ -1846,7 +1864,7 @@
             (bbappend, filename) = b
             if (bbappend == f) or ('%' in bbappend and bbappend.startswith(f[:bbappend.index('%')])):
                 filelist.append(filename)
-        return filelist
+        return tuple(filelist)
 
     def collection_priorities(self, pkgfns, d):
 
@@ -1882,7 +1900,8 @@
         for collection, pattern, regex, _ in self.bbfile_config_priorities:
             if regex in unmatched:
                 if d.getVar('BBFILE_PATTERN_IGNORE_EMPTY_%s' % collection) != '1':
-                    collectlog.warning("No bb files matched BBFILE_PATTERN_%s '%s'" % (collection, pattern))
+                    collectlog.warning("No bb files in %s matched BBFILE_PATTERN_%s '%s'" % (self.mc if self.mc else 'default',
+                                                                                             collection, pattern))
 
         return priorities
 
@@ -1949,7 +1968,7 @@
             except queue.Full:
                 pending.append(result)
 
-    def parse(self, filename, appends):
+    def parse(self, mc, cache, filename, appends):
         try:
             origfilter = bb.event.LogHandler.filter
             # Record the filename we're parsing into any events generated
@@ -1963,7 +1982,7 @@
             bb.event.set_class_handlers(self.handlers.copy())
             bb.event.LogHandler.filter = parse_filter
 
-            return True, self.bb_cache.parse(filename, appends)
+            return True, mc, cache.parse(filename, appends)
         except Exception as exc:
             tb = sys.exc_info()[2]
             exc.recipe = filename
@@ -1978,8 +1997,8 @@
             bb.event.LogHandler.filter = origfilter
 
 class CookerParser(object):
-    def __init__(self, cooker, filelist, masked):
-        self.filelist = filelist
+    def __init__(self, cooker, mcfilelist, masked):
+        self.mcfilelist = mcfilelist
         self.cooker = cooker
         self.cfgdata = cooker.data
         self.cfghash = cooker.data_hash
@@ -1993,25 +2012,27 @@
 
         self.skipped = 0
         self.virtuals = 0
-        self.total = len(filelist)
 
         self.current = 0
         self.process_names = []
 
-        self.bb_cache = bb.cache.Cache(self.cfgbuilder, self.cfghash, cooker.caches_array)
-        self.fromcache = []
-        self.willparse = []
-        for filename in self.filelist:
-            appends = self.cooker.collection.get_file_appends(filename)
-            if not self.bb_cache.cacheValid(filename, appends):
-                self.willparse.append((filename, appends))
-            else:
-                self.fromcache.append((filename, appends))
-        self.toparse = self.total - len(self.fromcache)
+        self.bb_caches = bb.cache.MulticonfigCache(self.cfgbuilder, self.cfghash, cooker.caches_array)
+        self.fromcache = set()
+        self.willparse = set()
+        for mc in self.cooker.multiconfigs:
+            for filename in self.mcfilelist[mc]:
+                appends = self.cooker.collections[mc].get_file_appends(filename)
+                if not self.bb_caches[mc].cacheValid(filename, appends):
+                    self.willparse.add((mc, self.bb_caches[mc], filename, appends))
+                else:
+                    self.fromcache.add((mc, self.bb_caches[mc], filename, appends))
+
+        self.total = len(self.fromcache) + len(self.willparse)
+        self.toparse = len(self.willparse)
         self.progress_chunk = int(max(self.toparse / 100, 1))
 
         self.num_processes = min(int(self.cfgdata.getVar("BB_NUMBER_PARSE_THREADS") or
-                                 multiprocessing.cpu_count()), len(self.willparse))
+                                 multiprocessing.cpu_count()), self.toparse)
 
         self.start()
         self.haveshutdown = False
@@ -2022,7 +2043,6 @@
         if self.toparse:
             bb.event.fire(bb.event.ParseStarted(self.toparse), self.cfgdata)
             def init():
-                Parser.bb_cache = self.bb_cache
                 bb.utils.set_process_name(multiprocessing.current_process().name)
                 multiprocessing.util.Finalize(None, bb.codeparser.parser_cache_save, exitpriority=1)
                 multiprocessing.util.Finalize(None, bb.fetch.fetcher_parse_save, exitpriority=1)
@@ -2032,7 +2052,7 @@
 
             def chunkify(lst,n):
                 return [lst[i::n] for i in range(n)]
-            self.jobs = chunkify(self.willparse, self.num_processes)
+            self.jobs = chunkify(list(self.willparse), self.num_processes)
 
             for i in range(0, self.num_processes):
                 parser = Parser(self.jobs[i], self.result_queue, self.parser_quit, init, self.cooker.configuration.profile)
@@ -2078,7 +2098,11 @@
             else:
                 process.join()
 
-        sync = threading.Thread(target=self.bb_cache.sync)
+        def sync_caches():
+            for c in self.bb_caches.values():
+                c.sync()
+
+        sync = threading.Thread(target=sync_caches)
         sync.start()
         multiprocessing.util.Finalize(None, sync.join, exitpriority=-100)
         bb.codeparser.parser_cache_savemerge()
@@ -2095,9 +2119,9 @@
             print("Processed parsing statistics saved to %s" % (pout))
 
     def load_cached(self):
-        for filename, appends in self.fromcache:
-            cached, infos = self.bb_cache.load(filename, appends)
-            yield not cached, infos
+        for mc, cache, filename, appends in self.fromcache:
+            cached, infos = cache.load(filename, appends)
+            yield not cached, mc, infos
 
     def parse_generator(self):
         while True:
@@ -2119,7 +2143,7 @@
         result = []
         parsed = None
         try:
-            parsed, result = next(self.results)
+            parsed, mc, result = next(self.results)
         except StopIteration:
             self.shutdown()
             return False
@@ -2175,13 +2199,16 @@
             if info_array[0].skipped:
                 self.skipped += 1
                 self.cooker.skiplist[virtualfn] = SkippedPackage(info_array[0])
-            (fn, cls, mc) = bb.cache.virtualfn2realfn(virtualfn)
-            self.bb_cache.add_info(virtualfn, info_array, self.cooker.recipecaches[mc],
+            self.bb_caches[mc].add_info(virtualfn, info_array, self.cooker.recipecaches[mc],
                                         parsed=parsed, watcher = self.cooker.add_filewatch)
         return True
 
     def reparse(self, filename):
-        infos = self.bb_cache.parse(filename, self.cooker.collection.get_file_appends(filename))
-        for vfn, info_array in infos:
-            (fn, cls, mc) = bb.cache.virtualfn2realfn(vfn)
-            self.cooker.recipecaches[mc].add_from_recipeinfo(vfn, info_array)
+        to_reparse = set()
+        for mc in self.cooker.multiconfigs:
+            to_reparse.add((mc, filename, self.cooker.collections[mc].get_file_appends(filename)))
+
+        for mc, filename, appends in to_reparse:
+            infos = self.bb_caches[mc].parse(filename, appends)
+            for vfn, info_array in infos:
+                self.cooker.recipecaches[mc].add_from_recipeinfo(vfn, info_array)
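
Conceptually, the cooker now keeps one CookerCollectFiles per multiconfig
instead of a single shared collection; roughly (a sketch using the names from
the diff above):

    # One file collection per multiconfig, keyed by mc name ('' is the default)
    collections = {}
    for mc in multiconfigs:  # e.g. ['', 'mc1', 'mc2']
        collections[mc] = CookerCollectFiles(bbfile_config_priorities, mc)
    # Lookups that used to go through self.collection select the right mc:
    appends = collections[mc].get_file_appends(fn)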
diff --git a/poky/bitbake/lib/bb/runqueue.py b/poky/bitbake/lib/bb/runqueue.py
index 16f076f..adb34a8 100644
--- a/poky/bitbake/lib/bb/runqueue.py
+++ b/poky/bitbake/lib/bb/runqueue.py
@@ -46,6 +46,12 @@
     (mc, fn, taskname, _) = split_tid_mcfn(tid)
     return (mc, fn, taskname)
 
+def split_mc(n):
+    if n.startswith("mc:"):
+        _, mc, n = n.split(":", 2)
+        return (mc, n)
+    return ('', n)
+
 def split_tid_mcfn(tid):
     if tid.startswith('mc:'):
         elems = tid.split(':')
@@ -1184,8 +1190,9 @@
         return len(self.runtaskentries)
 
     def prepare_task_hash(self, tid):
-        bb.parse.siggen.prep_taskhash(tid, self.runtaskentries[tid].depends, self.dataCaches[mc_from_tid(tid)])
-        self.runtaskentries[tid].hash = bb.parse.siggen.get_taskhash(tid, self.runtaskentries[tid].depends, self.dataCaches[mc_from_tid(tid)])
+        dc = bb.parse.siggen.get_data_caches(self.dataCaches, mc_from_tid(tid))
+        bb.parse.siggen.prep_taskhash(tid, self.runtaskentries[tid].depends, dc)
+        self.runtaskentries[tid].hash = bb.parse.siggen.get_taskhash(tid, self.runtaskentries[tid].depends, dc)
         self.runtaskentries[tid].unihash = bb.parse.siggen.get_unihash(tid)
 
     def dump_data(self):
@@ -1557,7 +1564,8 @@
 
     def rq_dump_sigfn(self, fn, options):
         bb_cache = bb.cache.NoCache(self.cooker.databuilder)
-        the_data = bb_cache.loadDataFull(fn, self.cooker.collection.get_file_appends(fn))
+        mc = bb.runqueue.mc_from_tid(fn)
+        the_data = bb_cache.loadDataFull(fn, self.cooker.collections[mc].get_file_appends(fn))
         siggen = bb.parse.siggen
         dataCaches = self.rqdata.dataCaches
         siggen.dump_sigfn(fn, dataCaches, options)
@@ -2042,10 +2050,10 @@
             if 'fakeroot' in taskdep and taskname in taskdep['fakeroot'] and not self.cooker.configuration.dry_run:
                 if not mc in self.rq.fakeworker:
                     self.rq.start_fakeworker(self, mc)
-                self.rq.fakeworker[mc].process.stdin.write(b"<runtask>" + pickle.dumps((taskfn, task, taskname, taskhash, unihash, True, self.cooker.collection.get_file_appends(taskfn), taskdepdata, False)) + b"</runtask>")
+                self.rq.fakeworker[mc].process.stdin.write(b"<runtask>" + pickle.dumps((taskfn, task, taskname, taskhash, unihash, True, self.cooker.collections[mc].get_file_appends(taskfn), taskdepdata, False)) + b"</runtask>")
                 self.rq.fakeworker[mc].process.stdin.flush()
             else:
-                self.rq.worker[mc].process.stdin.write(b"<runtask>" + pickle.dumps((taskfn, task, taskname, taskhash, unihash, True, self.cooker.collection.get_file_appends(taskfn), taskdepdata, False)) + b"</runtask>")
+                self.rq.worker[mc].process.stdin.write(b"<runtask>" + pickle.dumps((taskfn, task, taskname, taskhash, unihash, True, self.cooker.collections[mc].get_file_appends(taskfn), taskdepdata, False)) + b"</runtask>")
                 self.rq.worker[mc].process.stdin.flush()
 
             self.build_stamps[task] = bb.build.stampfile(taskname, self.rqdata.dataCaches[mc], taskfn, noextra=True)
@@ -2129,10 +2137,10 @@
                         self.rq.state = runQueueFailed
                         self.stats.taskFailed()
                         return True
-                self.rq.fakeworker[mc].process.stdin.write(b"<runtask>" + pickle.dumps((taskfn, task, taskname, taskhash, unihash, False, self.cooker.collection.get_file_appends(taskfn), taskdepdata, self.rqdata.setscene_enforce)) + b"</runtask>")
+                self.rq.fakeworker[mc].process.stdin.write(b"<runtask>" + pickle.dumps((taskfn, task, taskname, taskhash, unihash, False, self.cooker.collections[mc].get_file_appends(taskfn), taskdepdata, self.rqdata.setscene_enforce)) + b"</runtask>")
                 self.rq.fakeworker[mc].process.stdin.flush()
             else:
-                self.rq.worker[mc].process.stdin.write(b"<runtask>" + pickle.dumps((taskfn, task, taskname, taskhash, unihash, False, self.cooker.collection.get_file_appends(taskfn), taskdepdata, self.rqdata.setscene_enforce)) + b"</runtask>")
+                self.rq.worker[mc].process.stdin.write(b"<runtask>" + pickle.dumps((taskfn, task, taskname, taskhash, unihash, False, self.cooker.collections[mc].get_file_appends(taskfn), taskdepdata, self.rqdata.setscene_enforce)) + b"</runtask>")
                 self.rq.worker[mc].process.stdin.flush()
 
             self.build_stamps[task] = bb.build.stampfile(taskname, self.rqdata.dataCaches[mc], taskfn, noextra=True)
@@ -2298,7 +2306,8 @@
                 if len(self.rqdata.runtaskentries[p].depends) and not self.rqdata.runtaskentries[tid].depends.isdisjoint(total):
                     continue
                 orighash = self.rqdata.runtaskentries[tid].hash
-                newhash = bb.parse.siggen.get_taskhash(tid, self.rqdata.runtaskentries[tid].depends, self.rqdata.dataCaches[mc_from_tid(tid)])
+                dc = bb.parse.siggen.get_data_caches(self.rqdata.dataCaches, mc_from_tid(tid))
+                newhash = bb.parse.siggen.get_taskhash(tid, self.rqdata.runtaskentries[tid].depends, dc)
                 origuni = self.rqdata.runtaskentries[tid].unihash
                 newuni = bb.parse.siggen.get_unihash(tid)
                 # FIXME, need to check it can come from sstate at all for determinism?
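
For clarity, the behaviour of the new split_mc() helper (target names are
examples):

    split_mc("mc:mc1:core-image-minimal")  # -> ("mc1", "core-image-minimal")
    split_mc("core-image-minimal")         # -> ("", "core-image-minimal")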
diff --git a/poky/bitbake/lib/bb/siggen.py b/poky/bitbake/lib/bb/siggen.py
index 4c8d81c..872333d 100644
--- a/poky/bitbake/lib/bb/siggen.py
+++ b/poky/bitbake/lib/bb/siggen.py
@@ -38,6 +38,11 @@
     """
     name = "noop"
 
+    # If the derived class supports multiconfig datacaches, set this to True
+    # The default is False for backward compatibility with derived signature
+    # generators that do not understand multiconfig caches
+    supports_multiconfig_datacaches = False
+
     def __init__(self, data):
         self.basehash = {}
         self.taskhash = {}
@@ -58,10 +63,10 @@
     def get_unihash(self, tid):
         return self.taskhash[tid]
 
-    def prep_taskhash(self, tid, deps, dataCache):
+    def prep_taskhash(self, tid, deps, dataCaches):
         return
 
-    def get_taskhash(self, tid, deps, dataCache):
+    def get_taskhash(self, tid, deps, dataCaches):
         self.taskhash[tid] = hashlib.sha256(tid.encode("utf-8")).hexdigest()
         return self.taskhash[tid]
 
@@ -105,6 +110,38 @@
     def set_setscene_tasks(self, setscene_tasks):
         return
 
+    @classmethod
+    def get_data_caches(cls, dataCaches, mc):
+        """
+        This function returns the datacaches that should be passed to signature
+        generator functions. If the signature generator supports multiconfig
+        caches, the entire dictionary of data caches is sent, otherwise a
+        special proxy is sent that support both index access to all
+        multiconfigs, and also direct access for the default multiconfig.
+
+        The proxy class allows code in this class itself to always use
+        multiconfig aware code (to ease maintenance), but derived classes that
+        are unaware of multiconfig data caches can still access the default
+        multiconfig as expected.
+
+        Do not override this function in derived classes; it will be removed in
+        the future when support for multiconfig data caches is mandatory
+        """
+        class DataCacheProxy(object):
+            def __init__(self):
+                pass
+
+            def __getitem__(self, key):
+                return dataCaches[key]
+
+            def __getattr__(self, name):
+                return getattr(dataCaches[mc], name)
+
+        if cls.supports_multiconfig_datacaches:
+            return dataCaches
+
+        return DataCacheProxy()
+
 class SignatureGeneratorBasic(SignatureGenerator):
     """
     """
@@ -200,7 +237,7 @@
         self.lookupcache = {}
         self.taskdeps = {}
 
-    def rundep_check(self, fn, recipename, task, dep, depname, dataCache):
+    def rundep_check(self, fn, recipename, task, dep, depname, dataCaches):
         # Return True if we should keep the dependency, False to drop it
         # We only manipulate the dependencies for packages not in the whitelist
         if self.twl and not self.twl.search(recipename):
@@ -218,37 +255,40 @@
             pass
         return taint
 
-    def prep_taskhash(self, tid, deps, dataCache):
+    def prep_taskhash(self, tid, deps, dataCaches):
 
         (mc, _, task, fn) = bb.runqueue.split_tid_mcfn(tid)
 
-        self.basehash[tid] = dataCache.basetaskhash[tid]
+        self.basehash[tid] = dataCaches[mc].basetaskhash[tid]
         self.runtaskdeps[tid] = []
         self.file_checksum_values[tid] = []
-        recipename = dataCache.pkg_fn[fn]
+        recipename = dataCaches[mc].pkg_fn[fn]
 
         self.tidtopn[tid] = recipename
 
         for dep in sorted(deps, key=clean_basepath):
-            (depmc, _, deptaskname, depfn) = bb.runqueue.split_tid_mcfn(dep)
-            if mc != depmc:
+            (depmc, _, _, depmcfn) = bb.runqueue.split_tid_mcfn(dep)
+            depname = dataCaches[depmc].pkg_fn[depmcfn]
+            if not self.supports_multiconfig_datacaches and mc != depmc:
+                # If the signature generator doesn't understand multiconfig
+                # data caches, any dependency not in the same multiconfig must
+                # be skipped for backward compatibility
                 continue
-            depname = dataCache.pkg_fn[depfn]
-            if not self.rundep_check(fn, recipename, task, dep, depname, dataCache):
+            if not self.rundep_check(fn, recipename, task, dep, depname, dataCaches):
                 continue
             if dep not in self.taskhash:
                 bb.fatal("%s is not in taskhash, caller isn't calling in dependency order?" % dep)
             self.runtaskdeps[tid].append(dep)
 
-        if task in dataCache.file_checksums[fn]:
+        if task in dataCaches[mc].file_checksums[fn]:
             if self.checksum_cache:
-                checksums = self.checksum_cache.get_checksums(dataCache.file_checksums[fn][task], recipename, self.localdirsexclude)
+                checksums = self.checksum_cache.get_checksums(dataCaches[mc].file_checksums[fn][task], recipename, self.localdirsexclude)
             else:
-                checksums = bb.fetch2.get_file_checksums(dataCache.file_checksums[fn][task], recipename, self.localdirsexclude)
+                checksums = bb.fetch2.get_file_checksums(dataCaches[mc].file_checksums[fn][task], recipename, self.localdirsexclude)
             for (f,cs) in checksums:
                 self.file_checksum_values[tid].append((f,cs))
 
-        taskdep = dataCache.task_deps[fn]
+        taskdep = dataCaches[mc].task_deps[fn]
         if 'nostamp' in taskdep and task in taskdep['nostamp']:
             # Nostamp tasks need an implicit taint so that they force any dependent tasks to run
             if tid in self.taints and self.taints[tid].startswith("nostamp:"):
@@ -259,14 +299,14 @@
                 taint = str(uuid.uuid4())
                 self.taints[tid] = "nostamp:" + taint
 
-        taint = self.read_taint(fn, task, dataCache.stamp[fn])
+        taint = self.read_taint(fn, task, dataCaches[mc].stamp[fn])
         if taint:
             self.taints[tid] = taint
             logger.warning("%s is tainted from a forced run" % tid)
 
         return
 
-    def get_taskhash(self, tid, deps, dataCache):
+    def get_taskhash(self, tid, deps, dataCaches):
 
         data = self.basehash[tid]
         for dep in self.runtaskdeps[tid]:
@@ -640,6 +680,12 @@
         self.server = data.getVar('BB_HASHSERVE')
         self.method = "sstate_output_hash"
 
+#
+# Dummy class used for bitbake-selftest
+#
+class SignatureGeneratorTestMulticonfigDepends(SignatureGeneratorBasicHash):
+    name = "TestMulticonfigDepends"
+    supports_multiconfig_datacaches = True
 
 def dump_this_task(outfile, d):
     import bb.parse
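
The proxy returned by get_data_caches() for legacy signature generators
supports both access styles (a sketch; the multiconfig name and attribute are
illustrative):

    dc = SignatureGenerator.get_data_caches(dataCaches, mc)
    dc["mc1"].pkg_fn  # index access reaches any multiconfig's data cache
    dc.pkg_fn         # attribute access falls through to dataCaches[mc]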
diff --git a/poky/bitbake/lib/bb/tests/runqueue-tests/conf/bitbake.conf b/poky/bitbake/lib/bb/tests/runqueue-tests/conf/bitbake.conf
index 5e451fc..efebf00 100644
--- a/poky/bitbake/lib/bb/tests/runqueue-tests/conf/bitbake.conf
+++ b/poky/bitbake/lib/bb/tests/runqueue-tests/conf/bitbake.conf
@@ -1,7 +1,8 @@
 CACHE = "${TOPDIR}/cache"
 THISDIR = "${@os.path.dirname(d.getVar('FILE'))}"
 COREBASE := "${@os.path.normpath(os.path.dirname(d.getVar('FILE')+'/../../'))}"
-BBFILES = "${COREBASE}/recipes/*.bb"
+EXTRA_BBFILES ?= ""
+BBFILES = "${COREBASE}/recipes/*.bb ${EXTRA_BBFILES}"
 PROVIDES = "${PN}"
 PN = "${@bb.parse.vars_from_file(d.getVar('FILE', False),d)[0]}"
 PF = "${BB_CURRENT_MC}:${PN}"
diff --git a/poky/bitbake/lib/bb/tests/runqueue-tests/conf/multiconfig/mc1.conf b/poky/bitbake/lib/bb/tests/runqueue-tests/conf/multiconfig/mc1.conf
index ecf23e1..f34b8dc 100644
--- a/poky/bitbake/lib/bb/tests/runqueue-tests/conf/multiconfig/mc1.conf
+++ b/poky/bitbake/lib/bb/tests/runqueue-tests/conf/multiconfig/mc1.conf
@@ -1 +1,2 @@
 TMPDIR = "${TOPDIR}/mc1/"
+BBMASK += "recipes/fails-mc/fails-mc1.bb"
diff --git a/poky/bitbake/lib/bb/tests/runqueue-tests/conf/multiconfig/mc2.conf b/poky/bitbake/lib/bb/tests/runqueue-tests/conf/multiconfig/mc2.conf
index eef338e..c3360fc 100644
--- a/poky/bitbake/lib/bb/tests/runqueue-tests/conf/multiconfig/mc2.conf
+++ b/poky/bitbake/lib/bb/tests/runqueue-tests/conf/multiconfig/mc2.conf
@@ -1 +1,2 @@
 TMPDIR = "${TOPDIR}/mc2/"
+BBMASK += "recipes/fails-mc/fails-mc2.bb"
diff --git a/poky/bitbake/lib/bb/tests/runqueue-tests/recipes/f1.bb b/poky/bitbake/lib/bb/tests/runqueue-tests/recipes/f1.bb
new file mode 100644
index 0000000..d45a4cf
--- /dev/null
+++ b/poky/bitbake/lib/bb/tests/runqueue-tests/recipes/f1.bb
@@ -0,0 +1 @@
+do_install[mcdepends] = "mc:mc1:mc2:a1:do_build"
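
The mcdepends varflag takes five colon-separated fields; in the generic form below FROM_MC, TO_MC, RECIPE and TASK are placeholders. The line above therefore makes f1's do_install in mc1 wait for a1's do_build in mc2:

task[mcdepends] = "mc:FROM_MC:TO_MC:RECIPE:TASK"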
diff --git a/poky/bitbake/lib/bb/tests/runqueue-tests/recipes/fails-mc/fails-mc1.bb b/poky/bitbake/lib/bb/tests/runqueue-tests/recipes/fails-mc/fails-mc1.bb
new file mode 100644
index 0000000..17a181f
--- /dev/null
+++ b/poky/bitbake/lib/bb/tests/runqueue-tests/recipes/fails-mc/fails-mc1.bb
@@ -0,0 +1,4 @@
+python () {
+    if d.getVar("BB_CURRENT_MC") == "mc1":
+        bb.fatal("Multiconfig is mc1")
+}
diff --git a/poky/bitbake/lib/bb/tests/runqueue-tests/recipes/fails-mc/fails-mc2.bb b/poky/bitbake/lib/bb/tests/runqueue-tests/recipes/fails-mc/fails-mc2.bb
new file mode 100644
index 0000000..cc69e7b
--- /dev/null
+++ b/poky/bitbake/lib/bb/tests/runqueue-tests/recipes/fails-mc/fails-mc2.bb
@@ -0,0 +1,4 @@
+python () {
+    if d.getVar("BB_CURRENT_MC") == "mc2":
+        bb.fatal("Multiconfig is mc2")
+}
diff --git a/poky/bitbake/lib/bb/tests/runqueue.py b/poky/bitbake/lib/bb/tests/runqueue.py
index 4ba12a0..d3d62b9 100644
--- a/poky/bitbake/lib/bb/tests/runqueue.py
+++ b/poky/bitbake/lib/bb/tests/runqueue.py
@@ -232,6 +232,51 @@
                 expected.remove(x)
             self.assertEqual(set(tasks), set(expected))
 
+    def test_multiconfig_bbmask(self):
+        # This test validates that multiconfigs can independently mask off
+        # recipes they do not want with BBMASK. It works by having recipes
+        # that will fail to parse for mc1 and mc2, then making each multiconfig
+        # build the one that does parse. This ensures that the recipes are in
+            # each multiconfig's BBFILES, but each is masking only the one that
+            # doesn't parse.
+        with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
+            extraenv = {
+                "BBMULTICONFIG" : "mc1 mc2",
+                "BB_SIGNATURE_HANDLER" : "basic",
+                "EXTRA_BBFILES": "${COREBASE}/recipes/fails-mc/*.bb",
+            }
+            cmd = ["bitbake", "mc:mc1:fails-mc2", "mc:mc2:fails-mc1"]
+            self.run_bitbakecmd(cmd, tempdir, "", extraenv=extraenv)
+
+    def test_multiconfig_mcdepends(self):
+        with tempfile.TemporaryDirectory(prefix="runqueuetest") as tempdir:
+            extraenv = {
+                "BBMULTICONFIG" : "mc1 mc2",
+                "BB_SIGNATURE_HANDLER" : "TestMulticonfigDepends",
+                "EXTRA_BBFILES": "${COREBASE}/recipes/fails-mc/*.bb",
+            }
+            tasks = self.run_bitbakecmd(["bitbake", "mc:mc1:f1"], tempdir, "", extraenv=extraenv, cleanup=True)
+            expected = ["mc1:f1:%s" % t for t in self.alltasks] + \
+                       ["mc2:a1:%s" % t for t in self.alltasks]
+            self.assertEqual(set(tasks), set(expected))
+
+            # A rebuild does nothing
+            tasks = self.run_bitbakecmd(["bitbake", "mc:mc1:f1"], tempdir, "", extraenv=extraenv, cleanup=True)
+            self.assertEqual(set(tasks), set())
+
+            # Test that a signature change in the dependent task causes
+            # mcdepends to rebuild
+            tasks = self.run_bitbakecmd(["bitbake", "mc:mc2:a1", "-c", "compile", "-f"], tempdir, "", extraenv=extraenv, cleanup=True)
+            expected = ["mc2:a1:compile"]
+            self.assertEqual(set(tasks), set(expected))
+
+            rerun_tasks = self.alltasks[:]
+            for x in ("fetch", "unpack", "patch", "prepare_recipe_sysroot", "configure", "compile"):
+                rerun_tasks.remove(x)
+            tasks = self.run_bitbakecmd(["bitbake", "mc:mc1:f1"], tempdir, "", extraenv=extraenv, cleanup=True)
+            expected = ["mc1:f1:%s" % t for t in rerun_tasks] + \
+                       ["mc2:a1:%s" % t for t in rerun_tasks]
+            self.assertEqual(set(tasks), set(expected))
 
     @unittest.skipIf(sys.version_info < (3, 5, 0), 'Python 3.5 or later required')
     def test_hashserv_single(self):
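
Both new tests run under bitbake-selftest like the rest of this module; a single test can be selected with the usual dotted unittest path (the exact class name is whatever encloses these methods in runqueue.py):

$ bitbake-selftest bb.tests.runqueue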
diff --git a/poky/bitbake/lib/bb/tinfoil.py b/poky/bitbake/lib/bb/tinfoil.py
index 8c9b6b8..dccbe0e 100644
--- a/poky/bitbake/lib/bb/tinfoil.py
+++ b/poky/bitbake/lib/bb/tinfoil.py
@@ -117,15 +117,16 @@
 
     class TinfoilCookerCollectionAdapter:
         """ cooker.collection adapter """
-        def __init__(self, tinfoil):
+        def __init__(self, tinfoil, mc=''):
             self.tinfoil = tinfoil
+            self.mc = mc
         def get_file_appends(self, fn):
-            return self.tinfoil.get_file_appends(fn)
+            return self.tinfoil.get_file_appends(fn, self.mc)
         def __getattr__(self, name):
             if name == 'overlayed':
-                return self.tinfoil.get_overlayed_recipes()
+                return self.tinfoil.get_overlayed_recipes(self.mc)
             elif name == 'bbappends':
-                return self.tinfoil.run_command('getAllAppends')
+                return self.tinfoil.run_command('getAllAppends', self.mc)
             else:
                 raise AttributeError("%s instance has no attribute '%s'" % (self.__class__.__name__, name))
 
@@ -185,10 +186,11 @@
 
     def __init__(self, tinfoil):
         self.tinfoil = tinfoil
-        self.collection = self.TinfoilCookerCollectionAdapter(tinfoil)
+        self.multiconfigs = [''] + (tinfoil.config_data.getVar('BBMULTICONFIG') or '').split()
+        self.collections = {}
         self.recipecaches = {}
-        self.recipecaches[''] = self.TinfoilRecipeCacheAdapter(tinfoil)
-        for mc in (tinfoil.config_data.getVar('BBMULTICONFIG') or '').split():
+        for mc in self.multiconfigs:
+            self.collections[mc] = self.TinfoilCookerCollectionAdapter(tinfoil, mc)
             self.recipecaches[mc] = self.TinfoilRecipeCacheAdapter(tinfoil, mc)
         self._cache = {}
     def __getattr__(self, name):
@@ -492,11 +494,11 @@
             raise Exception('Not connected to server (did you call .prepare()?)')
         return self.server_connection.events.waitEvent(timeout)
 
-    def get_overlayed_recipes(self):
+    def get_overlayed_recipes(self, mc=''):
         """
         Find recipes which are overlayed (i.e. where recipes exist in multiple layers)
         """
-        return defaultdict(list, self.run_command('getOverlayedRecipes'))
+        return defaultdict(list, self.run_command('getOverlayedRecipes', mc))
 
     def get_skipped_recipes(self):
         """
@@ -534,11 +536,11 @@
                 raise bb.providers.NoProvider('Unable to find any recipe file matching "%s"' % pn)
         return best[3]
 
-    def get_file_appends(self, fn):
+    def get_file_appends(self, fn, mc=''):
         """
         Find the bbappends for a recipe file
         """
-        return self.run_command('getFileAppends', fn)
+        return self.run_command('getFileAppends', fn, mc)
 
     def all_recipes(self, mc='', sort=True):
         """
diff --git a/poky/bitbake/lib/bblayers/action.py b/poky/bitbake/lib/bblayers/action.py
index d6459d6..5b78195 100644
--- a/poky/bitbake/lib/bblayers/action.py
+++ b/poky/bitbake/lib/bblayers/action.py
@@ -143,11 +143,12 @@
 
         applied_appends = []
         for layer in layers:
-            overlayed = []
-            for f in self.tinfoil.cooker.collection.overlayed.keys():
-                for of in self.tinfoil.cooker.collection.overlayed[f]:
-                    if of.startswith(layer):
-                        overlayed.append(of)
+            overlayed = set()
+            for mc in self.tinfoil.cooker.multiconfigs:
+                for f in self.tinfoil.cooker.collections[mc].overlayed.keys():
+                    for of in self.tinfoil.cooker.collections[mc].overlayed[f]:
+                        if of.startswith(layer):
+                            overlayed.add(of)
 
             logger.plain('Copying files from %s...' % layer )
             for root, dirs, files in os.walk(layer):
@@ -174,14 +175,21 @@
                                     logger.warning('Overwriting file %s', fdest)
                             bb.utils.copyfile(f1full, fdest)
                             if ext == '.bb':
-                                for append in self.tinfoil.cooker.collection.get_file_appends(f1full):
+                                appends = set()
+                                for mc in self.tinfoil.cooker.multiconfigs:
+                                    appends |= set(self.tinfoil.cooker.collections[mc].get_file_appends(f1full))
+                                for append in appends:
                                     if layer_path_match(append):
                                         logger.plain('  Applying append %s to %s' % (append, fdest))
                                         self.apply_append(append, fdest)
                                         applied_appends.append(append)
 
         # Take care of when some layers are excluded and yet we have included bbappends for those recipes
-        for b in self.tinfoil.cooker.collection.bbappends:
+        bbappends = set()
+        for mc in self.tinfoil.cooker.multiconfigs:
+            bbappends |= set(self.tinfoil.cooker.collections[mc].bbappends)
+
+        for b in bbappends:
             (recipename, appendname) = b
             if appendname not in applied_appends:
                 first_append = None
diff --git a/poky/bitbake/lib/bblayers/query.py b/poky/bitbake/lib/bblayers/query.py
index e2cc310..ee2db0e 100644
--- a/poky/bitbake/lib/bblayers/query.py
+++ b/poky/bitbake/lib/bblayers/query.py
@@ -320,12 +320,12 @@
     def get_appends_for_files(self, filenames):
         appended, notappended = [], []
         for filename in filenames:
-            _, cls, _ = bb.cache.virtualfn2realfn(filename)
+            _, cls, mc = bb.cache.virtualfn2realfn(filename)
             if cls:
                 continue
 
             basename = os.path.basename(filename)
-            appends = self.tinfoil.cooker.collection.get_file_appends(basename)
+            appends = self.tinfoil.cooker.collections[mc].get_file_appends(basename)
             if appends:
                 appended.append((basename, list(appends)))
             else:
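
The key change here is that the multiconfig is now recovered from the virtual filename instead of being discarded, so appends are looked up in the matching collection. Illustrative behaviour of bb.cache.virtualfn2realfn, assuming the "mc:<name>:<path>" virtual filename convention (the path is hypothetical):

fn, cls, mc = bb.cache.virtualfn2realfn('mc:mc1:/srv/meta/recipes/foo.bb')
# fn == '/srv/meta/recipes/foo.bb', cls == '', mc == 'mc1'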