poky: subtree update:23deb29c1b..c67f57c09e

Adrian Bunk (1):
      librsvg: Upgrade 2.40.20 -> 2.40.21

Alejandro Hernandez (1):
      musl: Upgrade to latest release 1.2.1

Alex Kiernan (8):
      systemd: Upgrade v245.6 -> v246
      systemd: Move musl patches to SRC_URI_MUSL
      systemd: Fix path to modules-load.d et al
      nfs-utils: Drop StandardError=syslog from systemd unit
      openssh: Drop StandardError=syslog from systemd unit
      volatile-binds: Drop StandardOutput=syslog from systemd unit
      systemd: Upgrade v246 -> v246.1
      systemd: Upgrade v246.1 -> v246.2

Alexander Kanavin (16):
      sysvinit: update 2.96 -> 2.97
      kbd: update 2.2.0 -> 2.3.0
      gnu-config: update to latest revision
      go: update 1.14.4 -> 1.14.6
      meson: update 0.54.3 -> 0.55.0
      nasm: update 2.14.02 -> 2.15.03
      glib-2.0: correct build with latest meson
      rsync: update 3.2.1 -> 3.2.2
      vala: update 0.48.6 -> 0.48.7
      logrotate: update 3.16.0 -> 3.17.0
      mesa: update 20.1.2 -> 20.1.4
      libcap: update 2.36 -> 2.41
      net-tools: fix upstream version check
      meson.bbclass: add a cups-config entry
      oeqa: write @OETestTag content into json test reports for each case
      libhandy: upstream has moved to gnome

Alistair Francis (1):
      binutils: Remove RISC-V PIE patch

Andrei Gherzan (2):
      initscripts: Fix various shellcheck warnings in populate-volatile.sh
      initscripts: Fix populate-volatile.sh bug when file/dir exists

Anuj Mittal (4):
      harfbuzz: upgrade 2.6.8 -> 2.7.1
      sqlite3: upgrade 3.32.3 -> 3.33.0
      stress-ng: upgrade 0.11.17 -> 0.11.18
      x264: upgrade to latest revision

Armin Kuster (1):
      glibc: Security fix for CVE-2020-6096

Bruce Ashfield (25):
      linux-yocto/5.4: update to v5.4.53
      linux-yocto/5.4: fix perf build with binutils 2.35
      kernel/yocto: allow dangling KERNEL_FEATURES
      linux-yocto/5.4: update to v5.4.54
      systemtap: update to 4.3 latest
      kernel-devsrc: fix x86 (32bit) on target module build
      lttng-modules: update to 2.12.2 (fixes v5.8+ builds)
      yocto-bsps: update reference BSPs to 5.4.54
      kernel-yocto: enhance configuration queue analysis capabilities
      strace: update to 5.8 (fix build against v5.8 uapi headers)
      linux-yocto-rt/5.4: update to rt32
      linux-yocto/5.4: update to v5.4.56
      linux-yocto/5.4: update to v5.4.57
      kernel-yocto: set cwd before querying the meta data dir
      kernel-yocto: make "# is not set" matching more precise
      kernel-yocto: split meta data gathering into patch and config phases
      make-mod-scripts: add HOSTCXX definitions and gmp-native dependency
      kernel-devsrc: fix on target modules prepare for ARM
      kernel-devsrc: 5.8 + gcc10 require gcc-plugins + libmpc-dev
      linux-yocto/5.4: update to v5.4.58
      linux-yocto/5.4: perf cs-etm: Move definition of 'traceid_list' global variable from header file
      libc-headers: update to v5.8
      linux-yocto: introduce 5.8 reference kernel
      kernel-yocto/5.8: add gmp-native dependency
      linux-yocto/5.8: update to v5.8.1

Chandana kalluri (1):
      qemu.inc: Use virtual/libgl instead of mesa

Changhyeok Bae (2):
      iproute2: upgrade 5.7.0 -> 5.8.0
      ethtool: upgrade 5.7 -> 5.8

Changqing Li (5):
      layer.conf: fix adwaita-icon-theme signature change problem
      gtk-icon-cache.bbclass: add features_check
      gcc-runtime.inc: fix m32 compile fail with x86-64 compiler
      libffi: fix multilib header conflict
      gpgme: fix multilib header conflict

Chen Qi (3):
      grub: set CVE_PRODUCT to grub2
      runqemu: fix permission check of /dev/vhost-net
      fribidi: extend CVE_PRODUCT to include fribidi

Chris Laplante (11):
      lib/oe/log_colorizer.py: add LogColorizerProxyProgressHandler
      bitbake: build: print traceback if progress handler can't be created
      bitbake: build: create_progress_handler: fix calling 'get' on NoneType
      bitbake: progress: modernize syntax, format
      bitbake: progress: fix hypothetical NameError if 'progress' isn't set
      bitbake: progress: filter ANSI escape codes before looking for progress text
      bitbake: tests/color: add test suite for ANSI color code filtering
      bitbake: data: emit filename/lineno information for shell functions
      bitbake: build: print a backtrace when a Bash shell function fails
      bitbake: build: print a backtrace with the original metadata locations of Bash shell funcs
      bitbake: build: make shell traps less chatty when 'bitbake -v' is used

Dan Callaghan (1):
      stress-ng: create a symlink for /usr/bin/stress

Daniel Ammann (1):
      wic: fix typo

Daniel Gomez (1):
      allarch: Add missing allarch ttf-bitstream-vera

Diego Sueiro (1):
      cml1: Add the option to choose the .config root dir

Dmitry Baryshkov (3):
      mesa: enable freedreno Vulkan driver if freedreno is enabled
      arch-armv8-2a.inc: add tune include for armv8.2a
      tune-cortexa55.inc: switch to using armv8.2a include file

Fredrik Gustafsson (13):
      package_manager: Move to package_manager/__init__.py
      rpm: Move manifest to its own subdir
      ipk: Move ipk manifest to its own subdir
      deb: Move deb manifest to its own subdir
      rpm: Move rootfs to its own dir
      ipk: Move rootfs to its own dir
      deb: Move rootfs to its own dir
      rpm: Move sdk to its own dir
      ipk: Move sdk to its own dir
      deb: Move sdk to its own dir
      rpm: Move package manager to its own dir
      ipk: Move package manager to its own dir
      deb: Move package manager to its own dir

Guillaume Champagne (1):
      weston: add missing packageconfigs

Jeremy Puhlman (1):
      gobject-introspection: disable scanner caching in install

Joe Slater (3):
      libdnf: allow reproducible binary builds
      gconf: use python3
      gcr: make sure gcr-oids.h is generated

Jonathan Richardson (1):
      cortex-m0plus.inc: Add tuning for cortex M0 plus

Joshua Watt (3):
      bitbake: bitbake: command: Handle multiconfig in findSigInfo
      lib/oe/reproducible.py: Fix git HEAD check
      perl: Add check for non-arch Storable.pm file

Khasim Mohammed (2):
      wic/bootimg-efi: Add support for IMAGE_BOOT_FILES
      wic/bootimg-efi: Update docs for IMAGE_BOOT_FILES support in bootimg-efi

Khem Raj (23):
      qemumips: Use 34Kf CPU emulation
      libunwind: Backport a fix for -fno-common option to compile
      dhcp: Use -fcommon compiler option
      inetutils: Fix build with -fno-common
      libomxil: Use -fcommon compiler option
      kexec-tools: Fix build with -fno-common
      distcc: Fix build with -fno-common
      libacpi: Fix build with -fno-common
      minicom: Fix build when using -fno-common
      binutils: Upgrade to 2.35 release
      xf86-video-intel: Fix build with -fno-common
      glibc: Upgrade to 2.32 release
      go: Upgrade to 1.14.7
      webkitgtk: Upgrade to 2.28.4
      kexec-tools: Fix additional duplicate symbols on aarch64/x86_64 builds
      gcc: Upgrade to 10.2.0
      buildcpio.py: Apply patch to fix build with -fno-common
      buildgalculator: Patch to fix build with -fno-common
      localedef: Update to include floatn.h fix
      xserver-xorg: Fix build with -fno-common/mips
      binutils: Let crosssdk gold linker generate 4096 bytes long .interp section
      gcc-cross-canadian: Correct the regexp to delete versioned gcc binary
      curl: Upgrade to 7.72.0

Konrad Weihmann (2):
      rootfs-post: remove trailing blanks from tasks
      cve-update: handle baseMetricV2 as optional

Lee Chee Yang (4):
      buildhistory: use pid for temporary txt file name
      checklayer: check layer in BBLAYERS before test
      ghostscript: fix CVE-2020-15900
      qemu: fix CVE-2020-15863

Mark Hatle (1):
      package.bbclass: Sort shlib2 output for hash equivalency

Martin Jansa (2):
      net-tools: upgrade to latest revision in upstream repo instead of old debian snapshot
      perf: backport a fix for confusing non-fatal error

Matt Madison (1):
      cogl-1.0: correct X11 dependencies

Matthew (3):
      ltp: remove --with-power-management-testsuite from EXTRA_OECONF
      ltp: remove OOM tests from runtest/mm
      ltp: make copyFrom scp command non-fatal

Mikko Rapeli (2):
      alsa-topology-conf: use ${datadir} in do_install()
      alsa-ucm-conf: use ${datadir} in do_install()

Ming Liu (3):
      conf/machine: set UBOOT_MACHINE for qemumips and qemumips64
      multilib.conf: add u-boot to NON_MULTILIB_RECIPES
      libubootenv: uprev to v0.3

Mingli Yu (2):
      ccache: Upgrade to 3.7.11
      Revert "python3: define a profile directory path"

Naoto Yamaguchi (1):
      patch.py: Change to stricter fuzz detection

Nathan Rossi (4):
      libexif: Enable native and nativesdk
      cmake.bbclass: Rework compiler program variables for allarch
      python3: Improve handling of python3 manifest generation
      python3-manifest.json: Updates

Oleksandr Kravchuk (9):
      python3-setuptools: update to 49.2.0
      bash-completion: update to 2.11
      python3: update to 3.8.5
      re2c: update to 2.0
      diffoscope: update to 153
      json-c: update to 0.15
      git: update 2.28.0
      libwpe: update to 1.7.1
      python3-setuptools: update to 49.3.1

Richard Purdie (20):
      perl: Avoid race continually rebuilding miniperl
      gcc: Fix mangled patch
      bitbake: server/process: Fix UI first connection tracking
      bitbake: server/process: Account for xmlrpc connections
      Revert "lib/oe/log_colorizer.py: add LogColorizerProxyProgressHandler"
      lib/package_manager: Fix missing imports
      populate_sdk_ext: Ensure buildtools doesn't corrupt OECORE_NATIVE_SYSROOT
      buildtools: Handle generic environment setup injection
      uninative: Handle PREMIRRORS generically
      maintainers: Update entries for Mark Hatle
      gcr: Fix patch Upstream-Status from v2 patch
      bitbake: server/process: Remove pointless process forking
      bitbake: server/process: Simplify idle callback handler function
      bitbake: server/process: Pass timeout/xmlrpc parameters directly to the server
      bitbake: server/process: Add extra logfile flushing
      packagefeed-stability: Remove as obsolete
      build-compare: Drop recipe
      qemu: Upgrade 5.0.0 -> 5.1.0
      selftest/tinfoil: Increase wait event timeout
      lttng-tools: upgrade 2.12.1 -> 2.12.2

Ross Burton (3):
      popt: upgrade to 1.18
      conf/machine: set UBOOT_MACHINE for qemuarm and qemuarm64
      gcc: backport a fix for out-of-line atomics on aarch64

TeohJayShen (2):
      oeqa/manual/bsp-hw.json : remove shutdown_system test
      oeqa/manual/bsp-hw.json : remove X_server_can_start_up_with_runlevel_5_boot test

Trevor Gamblin (1):
      llvm: upgrade 9.0.1 -> 10.0.1

Tyler Hicks (1):
      kernel-devicetree: Fix intermittent build failures caused by DTB builds

Usama Arif (3):
      kernel-fitimage: build configuration for image tree when dtb is not present
      oeqa/selftest/imagefeatures: Add testcase for fitImage
      ref-manual: Add documentation for kernel-fitimage

Vasyl Vavrychuk (1):
      runqemu: Check gtk or sdl option is passed together with gl or gl-es options.

Yi Zhao (1):
      pbzip2: extend for nativesdk

Zhang Qiang (1):
      kernel.bbclass: Configuration for environment with HOSTCXX

hongxu (1):
      nativesdk-rpm: adjust RPM_CONFIGDIR paths dynamically

zangrc (8):
      libevdev: upgrade 1.9.0 -> 1.9.1
      mpg123: upgrade 1.26.2 -> 1.26.3
      flex: Refresh patch
      stress-ng: upgrade 0.11.15 -> 0.11.17
      sudo: upgrade 1.9.1 -> 1.9.2
      libcap: Upgrade 2.41 -> 2.42
      libinput: Upgrade 1.15.6 -> 1.16.0
      python3-setuptools: Upgrade 49.2.0 -> 49.2.1

Signed-off-by: Andrew Geissler <geissonator@yahoo.com>
Change-Id: Ic7fa1e8484c1c7722a70c75608aa4ab21fa7d755
diff --git a/poky/meta/lib/oe/package_manager/deb/__init__.py b/poky/meta/lib/oe/package_manager/deb/__init__.py
new file mode 100644
index 0000000..72155b1
--- /dev/null
+++ b/poky/meta/lib/oe/package_manager/deb/__init__.py
@@ -0,0 +1,492 @@
+#
+# SPDX-License-Identifier: GPL-2.0-only
+#
+
+import re
+import subprocess
+from oe.package_manager import *
+
+class DpkgIndexer(Indexer):
+    def _create_configs(self):
+        bb.utils.mkdirhier(self.apt_conf_dir)
+        bb.utils.mkdirhier(os.path.join(self.apt_conf_dir, "lists", "partial"))
+        bb.utils.mkdirhier(os.path.join(self.apt_conf_dir, "apt.conf.d"))
+        bb.utils.mkdirhier(os.path.join(self.apt_conf_dir, "preferences.d"))
+
+        with open(os.path.join(self.apt_conf_dir, "preferences"),
+                "w") as prefs_file:
+            pass
+        with open(os.path.join(self.apt_conf_dir, "sources.list"),
+                "w+") as sources_file:
+            pass
+
+        with open(self.apt_conf_file, "w") as apt_conf:
+            with open(os.path.join(self.d.expand("${STAGING_ETCDIR_NATIVE}"),
+                "apt", "apt.conf.sample")) as apt_conf_sample:
+                for line in apt_conf_sample.read().split("\n"):
+                    line = re.sub(r"#ROOTFS#", "/dev/null", line)
+                    line = re.sub(r"#APTCONF#", self.apt_conf_dir, line)
+                    apt_conf.write(line + "\n")
+
+    def write_index(self):
+        self.apt_conf_dir = os.path.join(self.d.expand("${APTCONF_TARGET}"),
+                "apt-ftparchive")
+        self.apt_conf_file = os.path.join(self.apt_conf_dir, "apt.conf")
+        self._create_configs()
+
+        os.environ['APT_CONFIG'] = self.apt_conf_file
+
+        arch_list = []
+        pkg_archs = self.d.getVar('PACKAGE_ARCHS')
+        if pkg_archs is not None:
+            arch_list = pkg_archs.split()
+        sdk_pkg_archs = self.d.getVar('SDK_PACKAGE_ARCHS')
+        if sdk_pkg_archs is not None:
+            for a in sdk_pkg_archs.split():
+                if a not in arch_list:
+                    arch_list.append(a)
+
+        all_mlb_pkg_arch_list = (self.d.getVar('ALL_MULTILIB_PACKAGE_ARCHS') or "").split()
+        arch_list.extend(arch for arch in all_mlb_pkg_arch_list if arch not in arch_list)
+
+        apt_ftparchive = bb.utils.which(os.getenv('PATH'), "apt-ftparchive")
+        gzip = bb.utils.which(os.getenv('PATH'), "gzip")
+
+        index_cmds = []
+        deb_dirs_found = False
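+        # For every architecture directory present under the deploy dir,
+        # queue commands that generate Packages, Packages.gz and a Release
+        # file; the queued commands are run in parallel further below.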
+        for arch in arch_list:
+            arch_dir = os.path.join(self.deploy_dir, arch)
+            if not os.path.isdir(arch_dir):
+                continue
+
+            cmd = "cd %s; PSEUDO_UNLOAD=1 %s packages . > Packages;" % (arch_dir, apt_ftparchive)
+
+            cmd += "%s -fcn Packages > Packages.gz;" % gzip
+
+            with open(os.path.join(arch_dir, "Release"), "w+") as release:
+                release.write("Label: %s\n" % arch)
+
+            cmd += "PSEUDO_UNLOAD=1 %s release . >> Release" % apt_ftparchive
+
+            index_cmds.append(cmd)
+
+            deb_dirs_found = True
+
+        if not deb_dirs_found:
+            bb.note("There are no packages in %s" % self.deploy_dir)
+            return
+
+        oe.utils.multiprocess_launch(create_index, index_cmds, self.d)
+        if self.d.getVar('PACKAGE_FEED_SIGN') == '1':
+            raise NotImplementedError('Package feed signing not implemented for dpkg')
+
+class DpkgPkgsList(PkgsList):
+
+    def list_pkgs(self):
+        cmd = [bb.utils.which(os.getenv('PATH'), "dpkg-query"),
+               "--admindir=%s/var/lib/dpkg" % self.rootfs_dir,
+               "-W"]
+
+        cmd.append("-f=Package: ${Package}\nArchitecture: ${PackageArch}\nVersion: ${Version}\nFile: ${Package}_${Version}_${Architecture}.deb\nDepends: ${Depends}\nRecommends: ${Recommends}\nProvides: ${Provides}\n\n")
+
+        try:
+            cmd_output = subprocess.check_output(cmd, stderr=subprocess.STDOUT).strip().decode("utf-8")
+        except subprocess.CalledProcessError as e:
+            bb.fatal("Cannot get the installed packages list. Command '%s' "
+                     "returned %d:\n%s" % (' '.join(cmd), e.returncode, e.output.decode("utf-8")))
+
+        return opkg_query(cmd_output)
+
+class OpkgDpkgPM(PackageManager):
+    def __init__(self, d, target_rootfs):
+        """
+        This is an abstract class. Do not instantiate this directly.
+        """
+        super(OpkgDpkgPM, self).__init__(d, target_rootfs)
+
+    def package_info(self, pkg, cmd):
+        """
+        Returns a dictionary with the package info.
+
+        This method extracts the common parts for Opkg and Dpkg
+        """
+
+        try:
+            output = subprocess.check_output(cmd, stderr=subprocess.STDOUT, shell=True).decode("utf-8")
+        except subprocess.CalledProcessError as e:
+            bb.fatal("Unable to list available packages. Command '%s' "
+                     "returned %d:\n%s" % (cmd, e.returncode, e.output.decode("utf-8")))
+        return opkg_query(output)
+
+    def extract(self, pkg, pkg_info):
+        """
+        Returns the path to a tmpdir where the contents of a package reside.
+
+        Deleting the tmpdir is the responsibility of the caller.
+
+        This method extracts the common parts for Opkg and Dpkg
+        """
+
+        ar_cmd = bb.utils.which(os.getenv("PATH"), "ar")
+        tar_cmd = bb.utils.which(os.getenv("PATH"), "tar")
+        pkg_path = pkg_info[pkg]["filepath"]
+
+        if not os.path.isfile(pkg_path):
+            bb.fatal("Unable to extract package for '%s'."
+                     "File %s doesn't exists" % (pkg, pkg_path))
+
+        tmp_dir = tempfile.mkdtemp()
+        current_dir = os.getcwd()
+        os.chdir(tmp_dir)
+        data_tar = 'data.tar.xz'
+
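+        # A .deb is an ar archive containing debian-binary, control.tar.* and
+        # data.tar.xz; unpack the outer archive first, then the data tarball.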
+        try:
+            cmd = [ar_cmd, 'x', pkg_path]
+            output = subprocess.check_output(cmd, stderr=subprocess.STDOUT)
+            cmd = [tar_cmd, 'xf', data_tar]
+            output = subprocess.check_output(cmd, stderr=subprocess.STDOUT)
+        except subprocess.CalledProcessError as e:
+            bb.utils.remove(tmp_dir, recurse=True)
+            bb.fatal("Unable to extract %s package. Command '%s' "
+                     "returned %d:\n%s" % (pkg_path, ' '.join(cmd), e.returncode, e.output.decode("utf-8")))
+        except OSError as e:
+            bb.utils.remove(tmp_dir, recurse=True)
+            bb.fatal("Unable to extract %s package. Command '%s' "
+                     "returned %d:\n%s at %s" % (pkg_path, ' '.join(cmd), e.errno, e.strerror, e.filename))
+
+        bb.note("Extracted %s to %s" % (pkg_path, tmp_dir))
+        bb.utils.remove(os.path.join(tmp_dir, "debian-binary"))
+        bb.utils.remove(os.path.join(tmp_dir, "control.tar.gz"))
+        os.chdir(current_dir)
+
+        return tmp_dir
+
+    def _handle_intercept_failure(self, registered_pkgs):
+        self.mark_packages("unpacked", registered_pkgs.split())
+
+class DpkgPM(OpkgDpkgPM):
+    def __init__(self, d, target_rootfs, archs, base_archs, apt_conf_dir=None, deb_repo_workdir="oe-rootfs-repo", filterbydependencies=True):
+        super(DpkgPM, self).__init__(d, target_rootfs)
+        self.deploy_dir = oe.path.join(self.d.getVar('WORKDIR'), deb_repo_workdir)
+
+        create_packages_dir(self.d, self.deploy_dir, d.getVar("DEPLOY_DIR_DEB"), "package_write_deb", filterbydependencies)
+
+        if apt_conf_dir is None:
+            self.apt_conf_dir = self.d.expand("${APTCONF_TARGET}/apt")
+        else:
+            self.apt_conf_dir = apt_conf_dir
+        self.apt_conf_file = os.path.join(self.apt_conf_dir, "apt.conf")
+        self.apt_get_cmd = bb.utils.which(os.getenv('PATH'), "apt-get")
+        self.apt_cache_cmd = bb.utils.which(os.getenv('PATH'), "apt-cache")
+
+        self.apt_args = d.getVar("APT_ARGS")
+
+        self.all_arch_list = archs.split()
+        all_mlb_pkg_arch_list = (self.d.getVar('ALL_MULTILIB_PACKAGE_ARCHS') or "").split()
+        self.all_arch_list.extend(arch for arch in all_mlb_pkg_arch_list if arch not in self.all_arch_list)
+
+        self._create_configs(archs, base_archs)
+
+        self.indexer = DpkgIndexer(self.d, self.deploy_dir)
+
+    def mark_packages(self, status_tag, packages=None):
+        """
+        This function will change a package's status in the /var/lib/dpkg/status file.
+        If 'packages' is None then status_tag will be applied to all
+        packages.
+        """
+        status_file = self.target_rootfs + "/var/lib/dpkg/status"
+
+        with open(status_file, "r") as sf:
+            with open(status_file + ".tmp", "w+") as tmp_sf:
+                if packages is None:
+                    tmp_sf.write(re.sub(r"Package: (.*?)\n((?:[^\n]+\n)*?)Status: (.*)(?:unpacked|installed)",
+                                        r"Package: \1\n\2Status: \3%s" % status_tag,
+                                        sf.read()))
+                else:
+                    if type(packages).__name__ != "list":
+                        raise TypeError("'packages' should be a list object")
+
+                    status = sf.read()
+                    for pkg in packages:
+                        status = re.sub(r"Package: %s\n((?:[^\n]+\n)*?)Status: (.*)(?:unpacked|installed)" % pkg,
+                                        r"Package: %s\n\1Status: \2%s" % (pkg, status_tag),
+                                        status)
+
+                    tmp_sf.write(status)
+
+        os.rename(status_file + ".tmp", status_file)
+
+    def run_pre_post_installs(self, package_name=None):
+        """
+        Run the pre/post installs for package "package_name". If package_name is
+        None, then run all pre/post install scriptlets.
+        """
+        info_dir = self.target_rootfs + "/var/lib/dpkg/info"
+        ControlScript = collections.namedtuple("ControlScript", ["suffix", "name", "argument"])
+        control_scripts = [
+                ControlScript(".preinst", "Preinstall", "install"),
+                ControlScript(".postinst", "Postinstall", "configure")]
+        status_file = self.target_rootfs + "/var/lib/dpkg/status"
+        installed_pkgs = []
+
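+        # Collect the names of all packages recorded in the dpkg status file.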
+        with open(status_file, "r") as status:
+            for line in status.read().split('\n'):
+                m = re.match(r"^Package: (.*)", line)
+                if m is not None:
+                    installed_pkgs.append(m.group(1))
+
+        if package_name is not None and package_name not in installed_pkgs:
+            return
+
+        os.environ['D'] = self.target_rootfs
+        os.environ['OFFLINE_ROOT'] = self.target_rootfs
+        os.environ['IPKG_OFFLINE_ROOT'] = self.target_rootfs
+        os.environ['OPKG_OFFLINE_ROOT'] = self.target_rootfs
+        os.environ['INTERCEPT_DIR'] = self.intercepts_dir
+        os.environ['NATIVE_ROOT'] = self.d.getVar('STAGING_DIR_NATIVE')
+
+        for pkg_name in installed_pkgs:
+            for control_script in control_scripts:
+                p_full = os.path.join(info_dir, pkg_name + control_script.suffix)
+                if os.path.exists(p_full):
+                    try:
+                        bb.note("Executing %s for package: %s ..." %
+                                 (control_script.name.lower(), pkg_name))
+                        output = subprocess.check_output([p_full, control_script.argument],
+                                stderr=subprocess.STDOUT).decode("utf-8")
+                        bb.note(output)
+                    except subprocess.CalledProcessError as e:
+                        bb.warn("%s for package %s failed with %d:\n%s" %
+                                (control_script.name, pkg_name, e.returncode,
+                                    e.output.decode("utf-8")))
+                        failed_postinsts_abort([pkg_name], self.d.expand("${T}/log.do_${BB_CURRENTTASK}"))
+
+    def update(self):
+        os.environ['APT_CONFIG'] = self.apt_conf_file
+
+        self.deploy_dir_lock()
+
+        cmd = "%s update" % self.apt_get_cmd
+
+        try:
+            subprocess.check_output(cmd.split(), stderr=subprocess.STDOUT)
+        except subprocess.CalledProcessError as e:
+            bb.fatal("Unable to update the package index files. Command '%s' "
+                     "returned %d:\n%s" % (e.cmd, e.returncode, e.output.decode("utf-8")))
+
+        self.deploy_dir_unlock()
+
+    def install(self, pkgs, attempt_only=False):
+        if attempt_only and len(pkgs) == 0:
+            return
+
+        os.environ['APT_CONFIG'] = self.apt_conf_file
+
+        cmd = "%s %s install --force-yes --allow-unauthenticated --no-remove %s" % \
+              (self.apt_get_cmd, self.apt_args, ' '.join(pkgs))
+
+        try:
+            bb.note("Installing the following packages: %s" % ' '.join(pkgs))
+            subprocess.check_output(cmd.split(), stderr=subprocess.STDOUT)
+        except subprocess.CalledProcessError as e:
+            (bb.fatal, bb.warn)[attempt_only]("Unable to install packages. "
+                                              "Command '%s' returned %d:\n%s" %
+                                              (cmd, e.returncode, e.output.decode("utf-8")))
+
+        # rename *.dpkg-new files/dirs
+        for root, dirs, files in os.walk(self.target_rootfs):
+            for dir in dirs:
+                new_dir = re.sub(r"\.dpkg-new", "", dir)
+                if dir != new_dir:
+                    os.rename(os.path.join(root, dir),
+                              os.path.join(root, new_dir))
+
+            for file in files:
+                new_file = re.sub(r"\.dpkg-new", "", file)
+                if file != new_file:
+                    os.rename(os.path.join(root, file),
+                              os.path.join(root, new_file))
+
+
+    def remove(self, pkgs, with_dependencies=True):
+        if not pkgs:
+            return
+
+        if with_dependencies:
+            os.environ['APT_CONFIG'] = self.apt_conf_file
+            cmd = "%s purge %s" % (self.apt_get_cmd, ' '.join(pkgs))
+        else:
+            cmd = "%s --admindir=%s/var/lib/dpkg --instdir=%s" \
+                  " -P --force-depends %s" % \
+                  (bb.utils.which(os.getenv('PATH'), "dpkg"),
+                   self.target_rootfs, self.target_rootfs, ' '.join(pkgs))
+
+        try:
+            subprocess.check_output(cmd.split(), stderr=subprocess.STDOUT)
+        except subprocess.CalledProcessError as e:
+            bb.fatal("Unable to remove packages. Command '%s' "
+                     "returned %d:\n%s" % (e.cmd, e.returncode, e.output.decode("utf-8")))
+
+    def write_index(self):
+        self.deploy_dir_lock()
+
+        result = self.indexer.write_index()
+
+        self.deploy_dir_unlock()
+
+        if result is not None:
+            bb.fatal(result)
+
+    def insert_feeds_uris(self, feed_uris, feed_base_paths, feed_archs):
+        if feed_uris == "":
+            return
+
+        sources_conf = os.path.join("%s/etc/apt/sources.list"
+                                    % self.target_rootfs)
+        arch_list = []
+
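+        # Use the architectures requested by the caller if provided, otherwise
+        # fall back to every architecture that has packages in the deploy dir.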
+        if feed_archs is None:
+            for arch in self.all_arch_list:
+                if not os.path.exists(os.path.join(self.deploy_dir, arch)):
+                    continue
+                arch_list.append(arch)
+        else:
+            arch_list = feed_archs.split()
+
+        feed_uris = self.construct_uris(feed_uris.split(), feed_base_paths.split())
+
+        with open(sources_conf, "w+") as sources_file:
+            for uri in feed_uris:
+                if arch_list:
+                    for arch in arch_list:
+                        bb.note('Adding dpkg channel at (%s)' % uri)
+                        sources_file.write("deb %s/%s ./\n" %
+                                           (uri, arch))
+                else:
+                    bb.note('Adding dpkg channel at (%s)' % uri)
+                    sources_file.write("deb %s ./\n" % uri)
+
+    def _create_configs(self, archs, base_archs):
+        base_archs = re.sub(r"_", r"-", base_archs)
+
+        if os.path.exists(self.apt_conf_dir):
+            bb.utils.remove(self.apt_conf_dir, True)
+
+        bb.utils.mkdirhier(self.apt_conf_dir)
+        bb.utils.mkdirhier(self.apt_conf_dir + "/lists/partial/")
+        bb.utils.mkdirhier(self.apt_conf_dir + "/apt.conf.d/")
+        bb.utils.mkdirhier(self.apt_conf_dir + "/preferences.d/")
+
+        arch_list = []
+        for arch in self.all_arch_list:
+            if not os.path.exists(os.path.join(self.deploy_dir, arch)):
+                continue
+            arch_list.append(arch)
+
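+        # Pin each architecture feed with an increasing priority (starting at
+        # 801), and pin any PACKAGE_EXCLUDE entries to -1 so apt will never
+        # install them.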
+        with open(os.path.join(self.apt_conf_dir, "preferences"), "w+") as prefs_file:
+            priority = 801
+            for arch in arch_list:
+                prefs_file.write(
+                    "Package: *\n"
+                    "Pin: release l=%s\n"
+                    "Pin-Priority: %d\n\n" % (arch, priority))
+
+                priority += 5
+
+            pkg_exclude = self.d.getVar('PACKAGE_EXCLUDE') or ""
+            for pkg in pkg_exclude.split():
+                prefs_file.write(
+                    "Package: %s\n"
+                    "Pin: release *\n"
+                    "Pin-Priority: -1\n\n" % pkg)
+
+        arch_list.reverse()
+
+        with open(os.path.join(self.apt_conf_dir, "sources.list"), "w+") as sources_file:
+            for arch in arch_list:
+                sources_file.write("deb file:%s/ ./\n" %
+                                   os.path.join(self.deploy_dir, arch))
+
+        base_arch_list = base_archs.split()
+        multilib_variants = self.d.getVar("MULTILIB_VARIANTS")
+        for variant in multilib_variants.split():
+            localdata = bb.data.createCopy(self.d)
+            variant_tune = localdata.getVar("DEFAULTTUNE_virtclass-multilib-" + variant, False)
+            orig_arch = localdata.getVar("DPKG_ARCH")
+            localdata.setVar("DEFAULTTUNE", variant_tune)
+            variant_arch = localdata.getVar("DPKG_ARCH")
+            if variant_arch not in base_arch_list:
+                base_arch_list.append(variant_arch)
+
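+        # Generate apt.conf from the sample shipped in the native sysroot,
+        # filling in the architecture list and the rootfs/config paths.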
+        with open(self.apt_conf_file, "w+") as apt_conf:
+            with open(self.d.expand("${STAGING_ETCDIR_NATIVE}/apt/apt.conf.sample")) as apt_conf_sample:
+                for line in apt_conf_sample.read().split("\n"):
+                    match_arch = re.match(r"  Architecture \".*\";$", line)
+                    architectures = ""
+                    if match_arch:
+                        for base_arch in base_arch_list:
+                            architectures += "\"%s\";" % base_arch
+                        apt_conf.write("  Architectures {%s};\n" % architectures);
+                        apt_conf.write("  Architecture \"%s\";\n" % base_archs)
+                    else:
+                        line = re.sub(r"#ROOTFS#", self.target_rootfs, line)
+                        line = re.sub(r"#APTCONF#", self.apt_conf_dir, line)
+                        apt_conf.write(line + "\n")
+
+        target_dpkg_dir = "%s/var/lib/dpkg" % self.target_rootfs
+        bb.utils.mkdirhier(os.path.join(target_dpkg_dir, "info"))
+
+        bb.utils.mkdirhier(os.path.join(target_dpkg_dir, "updates"))
+
+        if not os.path.exists(os.path.join(target_dpkg_dir, "status")):
+            open(os.path.join(target_dpkg_dir, "status"), "w+").close()
+        if not os.path.exists(os.path.join(target_dpkg_dir, "available")):
+            open(os.path.join(target_dpkg_dir, "available"), "w+").close()
+
+    def remove_packaging_data(self):
+        bb.utils.remove(self.target_rootfs + self.d.getVar('opkglibdir'), True)
+        bb.utils.remove(self.target_rootfs + "/var/lib/dpkg/", True)
+
+    def fix_broken_dependencies(self):
+        os.environ['APT_CONFIG'] = self.apt_conf_file
+
+        cmd = "%s %s --allow-unauthenticated -f install" % (self.apt_get_cmd, self.apt_args)
+
+        try:
+            subprocess.check_output(cmd.split(), stderr=subprocess.STDOUT)
+        except subprocess.CalledProcessError as e:
+            bb.fatal("Cannot fix broken dependencies. Command '%s' "
+                     "returned %d:\n%s" % (cmd, e.returncode, e.output.decode("utf-8")))
+
+    def list_installed(self):
+        return DpkgPkgsList(self.d, self.target_rootfs).list_pkgs()
+
+    def package_info(self, pkg):
+        """
+        Returns a dictionary with the package info.
+        """
+        cmd = "%s show %s" % (self.apt_cache_cmd, pkg)
+        pkg_info = super(DpkgPM, self).package_info(pkg, cmd)
+
+        pkg_arch = pkg_info[pkg]["pkgarch"]
+        pkg_filename = pkg_info[pkg]["filename"]
+        pkg_info[pkg]["filepath"] = \
+                os.path.join(self.deploy_dir, pkg_arch, pkg_filename)
+
+        return pkg_info
+
+    def extract(self, pkg):
+        """
+        Returns the path to a tmpdir where the contents of a package reside.
+
+        Deleting the tmpdir is the responsibility of the caller.
+        """
+        pkg_info = self.package_info(pkg)
+        if not pkg_info:
+            bb.fatal("Unable to get information for package '%s' while "
+                     "trying to extract the package."  % pkg)
+
+        tmp_dir = super(DpkgPM, self).extract(pkg, pkg_info)
+        bb.utils.remove(os.path.join(tmp_dir, "data.tar.xz"))
+
+        return tmp_dir